#ceph IRC Log


IRC Log for 2014-07-24

Timestamps are in GMT/BST.

[0:00] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:02] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[0:05] * sz0 (~sz0@94.55.197.185) Quit (Quit: My iMac has gone to sleep. ZZZzzz…)
[0:07] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[0:08] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[0:10] <akshayrao> hi… earlier I asked if ceph-mon can bind to 0.0.0.0… apparently there was no way to do this… I am also unable to get it to bind to a specific port I wanted… I added an entry under the [mon] section of ceph.conf but it seems to be ignoring the port and using the default port number… any ideas?
[0:12] * LeaChim (~LeaChim@host86-162-79-167.range86-162.btcentralplus.com) has joined #ceph
[0:12] * sarob (~sarob@67.23.204.226) Quit (Remote host closed the connection)
[0:13] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[0:15] * sz0 (~sz0@94.55.197.185) has joined #ceph
[0:17] * jtaguinerd (~jtaguiner@203.215.116.153) Quit (Quit: Leaving.)
[0:23] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[0:26] * analbeard (~shw@support.memset.com) has joined #ceph
[0:28] * rendar (~I@host102-181-dynamic.7-87-r.retail.telecomitalia.it) Quit ()
[0:32] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[0:36] <gregsfortytwo1> IP (including port) is part of the mon's identity; you can set it to whatever you want when creating the mon, but not afterwards
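For reference, a minimal ceph.conf sketch of what gregsfortytwo1 describes, assuming the monitor has not been created yet; the monitor id "a", hostname, IP and port below are placeholders, not values from this conversation. The address has to be in place before the mon is created, and binding to 0.0.0.0 is not supported because the address becomes part of the monitor map.

    [global]
    mon initial members = a
    mon host = 192.168.0.10:6790

    [mon.a]
    host = mon-node-1
    # non-default port; fixed when the monitor is created, not changeable afterwards
    mon addr = 192.168.0.10:6790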
[0:37] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:38] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[0:39] * lx0 is now known as lxo
[0:47] * ircolle (~Adium@107.107.190.125) Quit (Ping timeout: 480 seconds)
[0:49] * vxitch (~vxi@162.219.4.28) has joined #ceph
[0:49] * JoeGruher (~JoeGruher@134.134.137.73) has joined #ceph
[0:49] <vxitch> hi, i stood up a ceph firefly cluster, one admin node, two other nodes (both have mon and osd)
[0:49] <vxitch> ceph health reports the following: HEALTH_WARN 192 pgs degraded; 192 pgs stuck unclean
[0:49] <vxitch> what can i do about this? what does it mean?
[0:50] * seapasulli (~seapasull@95.85.33.150) Quit (Quit: leaving)
[0:50] <vxitch> ive been reading the docs but can't find anything relating to it
[0:55] <JoeGruher> hi folks... i have some Ubuntu 14.04 (Trusty) clients that don't have Internet access and I need to install Ceph - can anyone provide some guidance? Can I just download and copy over packages from somewhere? These nodes will not be OSDs or MONs, basically they just need to be able to attach RBDs.
[0:57] <Serbitar> you could do that
[0:57] <Serbitar> or set up a local mirror of the repositories so they can get the packages
[0:57] <Serbitar> or an apt-cacher-ng setup
[0:57] * akshayrao (~akshayrao@50-197-184-177-static.hfc.comcastbusiness.net) Quit (Quit: akshayrao)
[0:59] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:59] <JoeGruher> where would i pull the packages from?
[1:02] * LeaChim (~LeaChim@host86-162-79-167.range86-162.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[1:02] * garphy is now known as garphy`aw
[1:03] <vxitch> can anyone tell me how to get more info on the degraded/unclean status of my cluster?
[1:07] <Serbitar> http://ceph.com/docs/master/install/get-packages/
[1:07] <Serbitar> I'm assuming these are the details
[1:12] * sarob (~sarob@67.23.204.226) has joined #ceph
[1:14] * vxitch (~vxi@162.219.4.28) has left #ceph
[1:14] * andelhie_ (~Gamekille@128-107-239-236.cisco.com) Quit (Quit: This computer has gone to sleep)
[1:15] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[1:24] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[1:26] * zerick (~eocrospom@190.187.21.53) Quit (Read error: Connection reset by peer)
[1:27] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[1:29] * sarob (~sarob@67.23.204.226) Quit (Remote host closed the connection)
[1:33] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[1:39] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[1:42] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[1:48] * ircolle (~Adium@166.170.43.195) has joined #ceph
[1:49] <JoeGruher> so I pulled all the Trusty 0.81.1 debs from http://ceph.com/debian-firefly/pool/main/c/ceph/, but when I run dpkg -i against them I get dependency errors on packages that are present, like i get an error for ceph-common when i have ceph-common_0.80.1-1trusty_amd64.deb... anyone know the right way to do this?
[1:57] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[1:57] <Serbitar> dpkg -i *ceph*.deb all the packages in one command
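A rough sketch of that workflow, assuming the firefly Trusty packages are fetched on a machine with Internet access and copied over. The URL follows the pool directory JoeGruher mentioned, but the exact package names and versions are illustrative, and any non-Ceph dependencies (libleveldb1 and friends) still have to come from an Ubuntu mirror or install media.

    # on a host with Internet access: grab the Trusty debs (names/versions illustrative)
    wget -r -np -nd -A '*trusty_amd64.deb' http://ceph.com/debian-firefly/pool/main/c/ceph/
    # copy the .deb files to the offline client, then install them in a single dpkg run
    # so dpkg can resolve the inter-package dependencies among them
    sudo dpkg -i *ceph*.deb librados2_*.deb librbd1_*.deb python-ceph_*.deb
    # pull in any remaining non-ceph dependencies from a local mirror or install media
    sudo apt-get -f install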
[1:59] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:00] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:00] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:00] * Pedras (~Adium@216.207.42.129) Quit (Ping timeout: 480 seconds)
[2:03] * ircolle (~Adium@166.170.43.195) Quit (Ping timeout: 480 seconds)
[2:04] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[2:05] * Cube (~Cube@66-87-130-88.pools.spcsdns.net) Quit (Quit: Leaving.)
[2:11] * adamcrume (~quassel@2601:9:6680:47:fd25:e9f2:7265:ab4f) Quit (Remote host closed the connection)
[2:14] * JoeGruher (~JoeGruher@134.134.137.73) Quit ()
[2:17] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[2:18] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[2:20] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:34] * KaZeR (~kazer@64.201.252.132) Quit (Remote host closed the connection)
[2:34] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:35] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[2:40] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[2:41] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[2:47] * The_Bishop_ (~bishop@e181117185.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[2:48] * The_Bishop_ (~bishop@e181117185.adsl.alicedsl.de) has joined #ceph
[2:51] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[2:54] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[2:55] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[2:55] * zack_dolby (~textual@em111-188-52-12.pool.e-mobile.ne.jp) has joined #ceph
[2:57] * jtaguinerd (~jtaguiner@103.14.60.253) has joined #ceph
[2:57] * zack_dol_ (~textual@e0109-49-132-45-244.uqwimax.jp) has joined #ceph
[3:03] * zack_dolby (~textual@em111-188-52-12.pool.e-mobile.ne.jp) Quit (Ping timeout: 480 seconds)
[3:08] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[3:09] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:10] * narb (~Jeff@38.99.52.10) Quit (Quit: narb)
[3:14] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:16] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:16] * Gamekiller77 (~Gamekille@c-24-6-85-12.hsd1.ca.comcast.net) has joined #ceph
[3:31] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[3:32] * lx0 is now known as lxo
[3:38] * Gamekiller77 (~Gamekille@c-24-6-85-12.hsd1.ca.comcast.net) Quit (Quit: This computer has gone to sleep)
[3:40] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Quit: Computer has gone to sleep.)
[3:43] <tchmnkyz> \quit
[3:43] * tchmnkyz (~jeremy@0001638b.user.oftc.net) Quit (Quit: leaving)
[3:44] * tchmnkyz (tchmnkyz@0001638b.user.oftc.net) has joined #ceph
[3:47] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[3:48] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[3:52] * The_Bishop (~bishop@e181116103.adsl.alicedsl.de) has joined #ceph
[3:58] * The_Bishop_ (~bishop@e181117185.adsl.alicedsl.de) Quit (Read error: Operation timed out)
[3:58] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[4:05] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[4:05] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit ()
[4:07] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[4:07] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) has joined #ceph
[4:08] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[4:09] * akshayrao (~akshayrao@76.126.209.107) has joined #ceph
[4:13] * bandrus1 (~Adium@216.57.72.205) Quit (Quit: Leaving.)
[4:19] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[4:24] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:30] * JayJ (~jayj@pool-96-233-113-153.bstnma.fios.verizon.net) Quit (Quit: Computer has gone to sleep.)
[4:33] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[4:35] * zhaochao (~zhaochao@106.38.204.67) has joined #ceph
[4:40] * haomaiwa_ (~haomaiwan@118.186.129.94) Quit (Ping timeout: 480 seconds)
[4:40] * bkopilov (~bkopilov@213.57.17.40) Quit (Ping timeout: 480 seconds)
[4:45] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[4:48] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit ()
[4:48] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[4:51] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[5:00] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Remote host closed the connection)
[5:02] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (Remote host closed the connection)
[5:26] * dgarcia (~dgarcia@50-73-137-146-ip-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[5:31] * lucas1 (~Thunderbi@222.240.148.130) Quit (Read error: Connection reset by peer)
[5:31] * akshayrao (~akshayrao@76.126.209.107) Quit (Quit: akshayrao)
[5:36] * wschulze (~wschulze@cpe-68-174-87-52.nyc.res.rr.com) has joined #ceph
[5:36] * Vacum (~vovo@88.130.214.108) has joined #ceph
[5:40] * jtaguinerd (~jtaguiner@103.14.60.253) Quit (Quit: Leaving.)
[5:41] * jtaguinerd (~jtaguiner@103.14.60.253) has joined #ceph
[5:42] * jtaguinerd (~jtaguiner@103.14.60.253) Quit ()
[5:43] * Vacum_ (~vovo@88.130.209.205) Quit (Ping timeout: 480 seconds)
[5:51] <lupu> \clear
[5:55] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[5:57] * theanalyst (~abhi@117.96.13.143) has joined #ceph
[6:00] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[6:00] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[6:03] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[6:06] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[6:15] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[6:26] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[6:31] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[6:31] * wschulze (~wschulze@cpe-68-174-87-52.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:37] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[6:42] * theanalyst (~abhi@117.96.13.143) Quit (Ping timeout: 480 seconds)
[6:43] * theanalyst (~abhi@49.32.0.115) has joined #ceph
[6:45] <romxero> \clear
[6:59] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[7:06] * dgarcia (~dgarcia@50-73-137-146-ip-static.hfc.comcastbusiness.net) has joined #ceph
[7:12] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[7:13] * haomaiwang (~haomaiwan@106.38.204.62) has joined #ceph
[7:15] * laurie (~laurie@195.50.209.94) has joined #ceph
[7:22] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[7:22] * Pedras (~Adium@50.185.218.255) has joined #ceph
[7:24] * Nats__ (~Nats@2001:8000:200c:0:75c2:4c57:c60f:148f) has joined #ceph
[7:26] <bens> \poop
[7:31] * __NiC (~kristian@aeryn.ronningen.no) Quit (Ping timeout: 480 seconds)
[7:31] * Nats_ (~Nats@2001:8000:200c:0:e480:7cc3:38bd:9b7b) Quit (Ping timeout: 480 seconds)
[7:32] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[7:33] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[7:40] * _NiC (~kristian@aeryn.ronningen.no) has joined #ceph
[7:40] * rdas (~rdas@121.244.87.115) has joined #ceph
[7:45] * dgarcia (~dgarcia@50-73-137-146-ip-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[7:47] * michalefty (~micha@p20030071CE71B355D80025B0A1B87A79.dip0.t-ipconnect.de) has joined #ceph
[7:53] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[8:01] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[8:02] * dgarcia (~dgarcia@50-73-137-146-ip-static.hfc.comcastbusiness.net) has joined #ceph
[8:03] * ikrstic (~ikrstic@178-223-50-75.dynamic.isp.telekom.rs) has joined #ceph
[8:07] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:11] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[8:13] * thb (~me@port-15348.pppoe.wtnet.de) has joined #ceph
[8:14] * thb is now known as Guest3688
[8:14] * Guest3688 is now known as thb
[8:14] * dlan_ (~dennis@116.228.88.131) has joined #ceph
[8:16] * dlan (~dennis@116.228.88.131) Quit (Ping timeout: 480 seconds)
[8:36] * laurie (~laurie@195.50.209.94) Quit ()
[8:42] * oomkiller (oomkiller@d.clients.kiwiirc.com) has joined #ceph
[8:42] <oomkiller> If I plan to use only ssds as osds with btrfs, what should I use for journal?
[8:43] * cok (~chk@2a02:2350:18:1012:5516:5f4d:a6e3:9a3e) has joined #ceph
[8:45] <Kioob`Taff> oomkiller: if I were you, I would use a dedicated partition on the same SSD drives
[8:45] * jtaguinerd (~jtaguiner@203.115.183.18) has joined #ceph
[8:47] <oomkiller> Kioob`Taff what would be the benefit of a dedicated partition?
[8:51] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[8:51] <Kioob`Taff> for me, it avoids some free-space searching and flushes from the FS
[8:53] * rendar (~I@87.19.182.38) has joined #ceph
[8:58] * hyperbaba (~hyperbaba@private.neobee.net) has joined #ceph
[9:03] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[9:03] * lucas1 (~Thunderbi@222.240.148.130) Quit (Quit: lucas1)
[9:04] <oomkiller> Kioob`Taff but then I don't have trim anymore, do I?
[9:06] * andreask (~andreask@gw2.cgn3.hosteurope.de) has joined #ceph
[9:06] * ChanServ sets mode +v andreask
[9:07] * andreask (~andreask@gw2.cgn3.hosteurope.de) has left #ceph
[9:10] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[9:11] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[9:11] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[9:12] <kanagaraj> [ceph_deploy.osd][ERROR ] IOError: [Errno 2] No such file or directory: '/var/lib/ceph/bootstrap-osd/ceph.keyring' is thrown when i run 'ceph-deploy osd prepare'
[9:12] <kanagaraj> any idea what could be the issue?
[9:16] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[9:18] <Kioob`Taff> oomkiller: not sure that trim is still useful, and the journal partition will be very short (between 4 and 10 GB, no?)
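A minimal ceph.conf sketch of the dedicated-partition approach being discussed here — journal on a raw partition of the same SSD, data on the rest. The device path and the size are placeholders, not values from the log.

    [osd]
    # journal size in MB; mainly relevant when the journal is a file or when
    # ceph-disk creates the journal partition for you
    osd journal size = 10240

    [osd.0]
    # point the journal at a raw partition on the same SSD (path is illustrative)
    osd journal = /dev/disk/by-partlabel/osd0-journal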
[9:18] <Kioob`Taff> kanagaraj: to handle that, I had to copy the /var/lib/ceph/bootstrap-osd/ceph.keyring file from my first node
[9:19] <Kioob`Taff> (but I was not using ceph-deploy)
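The manual fix Kioob`Taff describes is roughly the following (hostnames are placeholders); when using ceph-deploy, running "ceph-deploy gatherkeys <mon-host>" from the admin node is the usual way to fetch the bootstrap keys instead.

    # copy the bootstrap-osd key from a node that already has it (hostnames illustrative)
    scp node1:/var/lib/ceph/bootstrap-osd/ceph.keyring ./ceph.bootstrap-osd.keyring
    scp ./ceph.bootstrap-osd.keyring node2:/var/lib/ceph/bootstrap-osd/ceph.keyring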
[9:21] <kanagaraj> Kioob`Taff, thanks, let me try this
[9:22] * longguang (~chatzilla@123.126.33.253) has joined #ceph
[9:23] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[9:27] <kanagaraj> Kioob`Taff, it worked, thanks
[9:28] * lucas1 (~Thunderbi@222.240.148.130) has joined #ceph
[9:29] <Kioob`Taff> ;)
[9:33] <romxero> has there been testing with ceph on different architectures other than arm and x64_86
[9:33] <romxero> ?
[9:35] * lucas1 (~Thunderbi@222.240.148.130) Quit (Quit: lucas1)
[9:35] * analbeard (~shw@host86-155-107-195.range86-155.btcentralplus.com) has joined #ceph
[9:35] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:41] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:41] * garphy`aw is now known as garphy
[9:42] * thomnico (~thomnico@2a01:e35:8b41:120:74ff:95f7:1bc6:596d) has joined #ceph
[9:45] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[9:52] * allsystemsarego (~allsystem@79.115.170.45) has joined #ceph
[9:54] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[9:54] <oomkiller> In the hardware recommendations for OSD hosts it is recommended to have 1 GHz per OSD, does hyperthreading from Intel count or do I need to subtract that?
[9:56] * steki (~steki@79-101-70-190.dynamic.isp.telekom.rs) has joined #ceph
[9:57] * lcavassa (~lcavassa@89.184.114.246) Quit (Remote host closed the connection)
[9:57] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[9:58] * analbeard (~shw@host86-155-107-195.range86-155.btcentralplus.com) Quit (Quit: Leaving.)
[9:58] <darkfader> oomkiller: idk how it was intended, in my vps hosts i count 30% for the hyperthreads
[9:58] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[9:59] <darkfader> since the osd will be nicely io-bound and shifting between disk and network i'd expect hyperthreading to work well during a rebuild
[10:02] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[10:06] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[10:08] * lincolnb (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) Quit (Read error: Operation timed out)
[10:08] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:17] * lincolnb (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) has joined #ceph
[10:24] <oomkiller> Kioob`Taff wouldn't the unformatted partition at the start of the ssd then be used for wear leveling?
[10:24] <oomkiller> darkfader: ok, I guess I will make use of it then; it seems Supermicro sells production-ready ceph osd nodes and they count it too
[10:25] * rdas (~rdas@121.244.87.115) has joined #ceph
[10:30] <Kioob`Taff> for me, the position of the partition doesn't affect wear leveling
[10:32] <Kioob`Taff> next week I will try «ASRock Rack 1U12LW-C2750» as ceph OSD nodes (http://www.asrockrack.com/general/productdetail.asp?Model=1U12LW-C2750), don't know if the Intel Avoton will be powerful enough.
[10:35] <oomkiller> Kioob`Taff do I see that correctly that for every disk swap you have to shut down and open the server?
[10:38] <Kioob`Taff> I understand it like that too
[10:38] <Kioob`Taff> With Ceph you can shut down a full node for that. It will «just» imply recovery
[10:39] <Sysadmin88> 12 disks = lots of recovery
[10:39] * sage (~quassel@cpe-23-242-158-79.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[10:39] <Kioob`Taff> Yes :) So, I will try
[10:40] <oomkiller> you won't have to recover everything, only the one disk that failed; the others just have to catch up, which is very fast
[10:40] * theanalyst (~abhi@0001c1e3.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:41] <oomkiller> but nevertheless I find it pretty unconvienent
[10:41] * jordanP (~jordan@185.23.92.11) has joined #ceph
[10:41] <oomkiller> *inconvenient
[10:41] <tnt_> I'm kind of wondering: how do you guys ensure network redundancy when using 10G links? Currently I'm using 2 GigE links per server with LACP, connected to 2 different switches in the same stack.
[10:41] * RameshN (~rnachimu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[10:42] * sage (~quassel@cpe-172-248-35-102.socal.res.rr.com) has joined #ceph
[10:42] * ChanServ sets mode +o sage
[10:43] <Serbitar> exactly the same way
[10:43] * theanalyst (~abhi@49.32.0.115) has joined #ceph
[10:43] <Serbitar> except s/1G/10G/
[10:43] <Kioob`Taff> I use 10G links in production, but without redundancy...
[10:43] <Serbitar> what happens if you have a switch failure?
[10:43] <tnt_> Serbitar: what kind of switch do you use ?
[10:44] <Kioob`Taff> Well, the ceph node is down
[10:44] <Serbitar> tnt_: I'm getting some IBM switches, they seem nice but I wasn't the one doing the decision making
[10:44] <Kioob`Taff> (but I have only *one* ceph node per switch...)
[10:44] <oomkiller> do you use separate networks for the public and cluster network?
[10:45] <Kioob`Taff> oomkiller: not physically.
[10:45] * cok (~chk@2a02:2350:18:1012:5516:5f4d:a6e3:9a3e) Quit (Quit: Leaving.)
[10:45] * theanalyst (~abhi@49.32.0.115) Quit (Remote host closed the connection)
[10:45] <Serbitar> Kioob`Taff: ah and i assume you have redundancy for the top level switches?
[10:45] <tnt_> Personally, currently I don't, because then I'd need 4 NICs just to have redundancy on both... and that's a lot of NICs and cables and network ports.
[10:45] * theanalyst (~abhi@49.32.0.115) has joined #ceph
[10:46] <Serbitar> dual port 10G nics are common
[10:46] <tnt_> Sure. But then you need 2. And that's like 4 cables, + 1 cable for mgmt and 1 cable for the ILOM, that's 6 network cables coming out of each server :p
[10:47] <Serbitar> no, im sure that you want more than that
[10:47] <Serbitar> :P
[10:47] * darkfader has 4 cables per node (3xgige, 1xib) and it's really not great
[10:47] <darkfader> but cheaper than buying ucs boxes :)
[10:51] <Kioob`Taff> Serbitar: yes, but it's my provider which handles that
[10:52] <cookednoodles> err why 1 cable for mgmt and 1 for lom ? :/
[10:52] <cookednoodles> vlans
[10:57] * garphy is now known as garphy`aw
[11:02] * zack_dol_ (~textual@e0109-49-132-45-244.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:06] * thomnico (~thomnico@2a01:e35:8b41:120:74ff:95f7:1bc6:596d) Quit (Ping timeout: 480 seconds)
[11:06] * steki (~steki@79-101-70-190.dynamic.isp.telekom.rs) Quit (Remote host closed the connection)
[11:06] * steki (~steki@91.195.39.5) has joined #ceph
[11:06] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[11:13] * lucas1 (~Thunderbi@222.247.57.50) Quit (Quit: lucas1)
[11:13] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[11:16] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (Ping timeout: 480 seconds)
[11:19] * rdas (~rdas@121.244.87.115) has joined #ceph
[11:21] * theanalyst (~abhi@49.32.0.115) Quit (Ping timeout: 480 seconds)
[11:21] * Sysadmin88 (~IceChat77@94.4.20.0) Quit (Read error: Connection reset by peer)
[11:34] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[11:45] * humbolt (~elias@chello080109074153.4.15.vie.surfer.at) has joined #ceph
[11:50] * thomnico (~thomnico@15.203.178.35) has joined #ceph
[11:50] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) Quit (Read error: Operation timed out)
[11:50] * mongo (~gdahlman@voyage.voipnw.net) Quit (Read error: Operation timed out)
[11:51] * RameshN (~rnachimu@121.244.87.117) Quit (Quit: Quit)
[11:51] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[11:53] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) has joined #ceph
[11:53] * mongo (~gdahlman@voyage.voipnw.net) has joined #ceph
[11:53] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) has joined #ceph
[11:57] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[12:02] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[12:16] * thomnico (~thomnico@15.203.178.35) Quit (Ping timeout: 480 seconds)
[12:32] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[12:41] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[12:42] * shang (~ShangWu@59.188.42.3) has joined #ceph
[12:47] * theanalyst (~abhi@49.32.0.103) has joined #ceph
[12:50] * fireD (~fireD@93-139-204-185.adsl.net.t-com.hr) has joined #ceph
[12:52] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[13:03] * shang (~ShangWu@59.188.42.3) Quit (Ping timeout: 480 seconds)
[13:11] * RameshN (~rnachimu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:15] * vbellur (~vijay@c-76-19-134-77.hsd1.ma.comcast.net) Quit (Quit: Leaving.)
[13:23] * humbolt1 (~elias@chello080109074153.4.15.vie.surfer.at) has joined #ceph
[13:30] * humbolt (~elias@chello080109074153.4.15.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[13:34] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:35] * jtang_ (~jtang@80.111.83.231) Quit (Ping timeout: 480 seconds)
[13:38] * thomnico (~thomnico@15.203.178.35) has joined #ceph
[13:42] * dmsimard_away is now known as dmsimard
[13:43] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[13:44] * drankis (~drankis__@89.111.13.198) has joined #ceph
[13:47] * zhaochao (~zhaochao@106.38.204.67) has left #ceph
[13:57] * jtaguinerd (~jtaguiner@203.115.183.18) Quit (Quit: Leaving.)
[13:58] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[14:02] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[14:05] * garphy`aw is now known as garphy
[14:06] * RameshN (~rnachimu@121.244.87.117) has joined #ceph
[14:17] * rdas (~rdas@121.244.87.115) Quit (Read error: Operation timed out)
[14:25] * steki (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[14:25] * wschulze (~wschulze@cpe-68-174-87-52.nyc.res.rr.com) has joined #ceph
[14:25] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[14:27] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[14:28] * oms101 (~oms101@p20030057EA009B00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[14:37] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[14:39] * wschulze (~wschulze@cpe-68-174-87-52.nyc.res.rr.com) Quit (Quit: Leaving.)
[14:39] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[14:39] <oms101> has much been done in support for systemd with ceph?
[14:40] * sz0 (~sz0@94.55.197.185) Quit ()
[14:42] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:47] * danieljh_ (~daniel@HSI-KBW-046-005-197-128.hsi8.kabel-badenwuerttemberg.de) Quit (Quit: Lost terminal)
[14:47] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[14:49] <PVi1> Hi all, this is my iostat output from rados bench. I have 2 10gb journals on a 100gb ssd, but I am unable to reach a write speed higher than 80MB/s per ssd with two journals on it. If I use just one journal I can reach 180MB/s!
[14:49] <PVi1> My iostat output:
[14:49] <PVi1> avg-cpu: %user %nice %system %iowait %steal %idle
[14:49] <PVi1> 2,67 0,00 7,10 10,69 0,00 79,53
[14:49] <PVi1> Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
[14:49] <PVi1> sdh 0,00 0,00 0,00 154,00 0,00 69768,00 906,08 6,70 43,82 0,00 43,82 6,36 98,00
[14:49] <PVi1> sdh1 0,00 0,00 0,00 77,00 0,00 36936,00 959,38 3,71 48,21 0,00 48,21 3,79 29,20
[14:49] <PVi1> sdh2 0,00 0,00 0,00 68,00 0,00 32832,00 965,65 1,52 22,41 0,00 22,41 2,24 15,20
[14:49] <PVi1> sdh3 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00
[14:49] <PVi1> sdh4 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00 0,00
[14:49] <PVi1> Why is the device util higher than the sum of the utils per partition?
[14:51] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[14:53] * garphy is now known as garphy`aw
[14:54] * theanalyst (~abhi@0001c1e3.user.oftc.net) Quit (Read error: Operation timed out)
[14:55] <burley> %util is meaningless for SSDs
[14:58] * JayJ (~jayj@157.130.21.226) has joined #ceph
[14:59] * markbby (~Adium@168.94.245.2) has joined #ceph
[15:00] * RameshN (~rnachimu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[15:01] * thomnico (~thomnico@15.203.178.35) Quit (Read error: Operation timed out)
[15:02] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[15:05] * joerocklin_ (~joe@cpe-65-185-149-56.woh.res.rr.com) Quit (Quit: ZNC - http://znc.in)
[15:07] * sjm (~sjm@143.115.158.238) has joined #ceph
[15:07] * theanalyst (~abhi@117.96.13.143) has joined #ceph
[15:09] * fabioFVZ (~fabiofvz@213.187.20.119) has joined #ceph
[15:09] * fabioFVZ (~fabiofvz@213.187.20.119) Quit ()
[15:09] * fabioFVZ (~fabiofvz@213.187.20.119) has joined #ceph
[15:10] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) has joined #ceph
[15:10] * theanalyst (~abhi@117.96.13.143) Quit (Remote host closed the connection)
[15:10] * theanalyst (~abhi@117.96.13.143) has joined #ceph
[15:21] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[15:23] * JayJ (~jayj@157.130.21.226) Quit (Remote host closed the connection)
[15:24] * JayJ (~jayj@157.130.21.226) has joined #ceph
[15:34] * tdasilva (~quassel@nat-pool-bos-u.redhat.com) has joined #ceph
[15:37] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[15:40] * michalefty (~micha@p20030071CE71B355D80025B0A1B87A79.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[15:44] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[15:49] * kapil (~ksharma@2620:113:80c0:5::2222) has joined #ceph
[15:51] * markbby1 (~Adium@168.94.245.3) has joined #ceph
[15:53] * markbby2 (~Adium@168.94.245.1) has joined #ceph
[15:53] * markbby1 (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[15:56] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[15:56] * markbby2 (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[15:57] * thiago (~thiago@201.76.188.234) has joined #ceph
[15:58] * drankis (~drankis__@89.111.13.198) Quit (Ping timeout: 480 seconds)
[15:58] * cok (~chk@2a02:2350:18:1012:152c:71c4:b976:4ad9) has joined #ceph
[15:59] * theanalyst (~abhi@0001c1e3.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:00] * markbby (~Adium@168.94.245.1) has joined #ceph
[16:01] * hyperbaba (~hyperbaba@private.neobee.net) Quit (Ping timeout: 480 seconds)
[16:03] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[16:04] * markbby (~Adium@168.94.245.1) has joined #ceph
[16:04] * NetWeaver (~NetWeaver@c-24-2-152-119.hsd1.ct.comcast.net) has joined #ceph
[16:05] <thiago> Hi, I'm from Brazil. I'm doing tests with CephFS. Does anyone here use CephFS in a production environment?
[16:08] <thiago> This FAQ mentions that CephFS is not ready for use in a production environment: http://wiki.ceph.com/FAQs/Is_Ceph_Production-Quality%3F
[16:08] <thiago> On the other hand, it seems that this is just about the lack of commercial support from Inktank
[16:10] <thiago> Does anyone here speak Portuguese?
[16:10] <janos_> cephFS will likely get more attention since RedHat acquired inktank, but for now it's "use at your own risk"
[16:12] * markbby (~Adium@168.94.245.1) Quit (Ping timeout: 480 seconds)
[16:14] * simulx2 (~simulx@vpn.expressionanalysis.com) has joined #ceph
[16:15] * simulx (~simulx@vpn.expressionanalysis.com) Quit (Ping timeout: 480 seconds)
[16:16] <thiago> I understand
[16:16] * markbby (~Adium@168.94.245.4) has joined #ceph
[16:17] <thiago> Okay, until this happens I'll keep testing and learning
[16:17] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[16:18] <janos_> from what people report, cephFS is pretty solid. but it hasn't been cleared for takeoff
[16:21] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[16:22] <NetWeaver> Anyone know about glance/ceph qcow2 vs. raw issues? Trying to understand our best path in Icehouse, I guess a forced convert to raw on use? Or is there a way to use qcow2 as normal?
[16:22] * markbby (~Adium@168.94.245.2) has joined #ceph
[16:23] <thiago> I need to put together a scheme similar to RAID1 over the network, with concurrent access on both sides. I think CephFS will be the solution for me.
[16:23] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[16:24] * markbby (~Adium@168.94.245.2) has joined #ceph
[16:25] <NetWeaver> I should likely ask an openstack channel as well...
[16:25] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[16:28] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[16:28] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[16:33] * sz0 (~sz0@94.55.197.185) has joined #ceph
[16:33] * markbby (~Adium@168.94.245.1) has joined #ceph
[16:34] * sz0 (~sz0@94.55.197.185) Quit ()
[16:36] * markbby1 (~Adium@168.94.245.1) has joined #ceph
[16:40] * NetWeaver (~NetWeaver@c-24-2-152-119.hsd1.ct.comcast.net) has left #ceph
[16:40] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[16:41] * shang (~ShangWu@59.188.42.3) has joined #ceph
[16:41] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) Quit (Quit: Leaving.)
[16:46] <ron-slc> thiago: I've had pretty good success with CephFS, over 4TB of data stored. BUT the main recommendation at this point, don't use some of the fancy features, keep it simple. Which means multiple MDS's in a high-load environment may not work so well.
[16:47] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[16:47] <ron-slc> The only special CephFS feature we use is anchoring specific directory sub-trees to different storage Pool(s). Other than that we only have ONE MDS. And we make NIGHTLY backups.
[16:47] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[16:47] <ron-slc> Haven't lost anything in 2 years, but there's always tomorrow.
[16:47] * markbby (~Adium@168.94.245.1) has joined #ceph
[16:49] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:50] * markbby1 (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[16:53] * madkiss (~madkiss@chello084112124211.20.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[17:02] * joao|lap (~JL@a79-168-5-220.cpe.netcabo.pt) has joined #ceph
[17:02] * ChanServ sets mode +o joao|lap
[17:03] * kanagaraj (~kanagaraj@27.7.17.236) has joined #ceph
[17:07] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) Quit (Ping timeout: 480 seconds)
[17:09] * cok (~chk@2a02:2350:18:1012:152c:71c4:b976:4ad9) Quit (Quit: Leaving.)
[17:12] * jobewan (~jobewan@snapp.centurylink.net) has joined #ceph
[17:14] * lupu (~lupu@86.107.101.214) has left #ceph
[17:15] * ufven (~ufven@130-229-28-120-dhcp.cmm.ki.se) has joined #ceph
[17:16] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[17:17] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[17:18] * Pedras (~Adium@50.185.218.255) has joined #ceph
[17:19] * thomnico (~thomnico@15.203.178.35) has joined #ceph
[17:20] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[17:21] * bkopilov (~bkopilov@213.57.17.91) has joined #ceph
[17:21] * lupu (~lupu@86.107.101.214) has joined #ceph
[17:30] * narb (~Jeff@38.99.52.10) has joined #ceph
[17:30] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[17:32] * joao|lap (~JL@a79-168-5-220.cpe.netcabo.pt) Quit (Ping timeout: 480 seconds)
[17:32] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:34] * thomnico (~thomnico@15.203.178.35) Quit (Ping timeout: 480 seconds)
[17:35] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:37] * markbby1 (~Adium@168.94.245.4) has joined #ceph
[17:37] * markbby (~Adium@168.94.245.1) Quit (Remote host closed the connection)
[17:38] <JayJ> I have SSD and SATA drives in a Ceph cluster. I want to configure a pool with only SSD drives and one with SATA drives. Could anyone point me to any literature to accomplish that please?
[17:38] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[17:38] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:39] <Vacum> JayJ: the official docs have one example for this: http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
[17:41] <JayJ> Vacum: Oops! Should have done better research... Sorry, but thank you for the pointer
[17:43] <tnt_> The issue comes when you have both SSD and SATA on the same OSD host ...
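The pattern in the doc Vacum linked boils down to giving the SSDs and the SATA disks separate CRUSH roots and a rule per root. A heavily abbreviated sketch of a decompiled CRUSH map follows; bucket names, ids and weights are placeholders, and the sata side is analogous:

    root ssd {
            id -10
            alg straw
            hash 0
            item node1-ssd weight 1.000
            item node2-ssd weight 1.000
    }
    rule ssd {
            ruleset 4
            type replicated
            min_size 1
            max_size 10
            step take ssd
            step chooseleaf firstn 0 type host
            step emit
    }

A pool is then pointed at the rule with something like "ceph osd pool set ssd-pool crush_ruleset 4". For tnt_'s mixed case (SSD and SATA in the same physical host), a common workaround is to split each host into two logical CRUSH host buckets, e.g. node1-ssd and node1-sata, so the rules can still choose by "host".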
[17:45] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[17:46] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit ()
[17:53] * jtaguinerd (~jtaguiner@125.212.121.194) has joined #ceph
[18:01] * Cube (~Cube@12.248.40.138) has joined #ceph
[18:04] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[18:06] * ircolle (~Adium@2601:1:a580:145a:e44f:ca4d:fa11:f3ba) has joined #ceph
[18:06] * ircolle (~Adium@2601:1:a580:145a:e44f:ca4d:fa11:f3ba) Quit ()
[18:06] * fabioFVZ (~fabiofvz@213.187.20.119) Quit ()
[18:07] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[18:08] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[18:10] * cok (~chk@46.30.211.29) has joined #ceph
[18:11] * ircolle (~Adium@2601:1:a580:145a:4113:1ba4:368:43de) has joined #ceph
[18:13] * morse_ (~morse@supercomputing.univpm.it) has joined #ceph
[18:13] * morse (~morse@supercomputing.univpm.it) Quit (Read error: Connection reset by peer)
[18:13] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[18:14] * krystal (~krystal@2a00:1080:804:200:2677:3ff:fece:f200) has joined #ceph
[18:18] <krystal> Hi here! I was googling about ceph reliability and I was wondering: when you increase your PG count, don't you increase your data-loss probability when losing multiple osds at a time? Is there any formula out there?
[18:19] * Gamekiller77 (~Gamekille@128-107-239-234.cisco.com) has joined #ceph
[18:24] * lcavassa (~lcavassa@89.184.114.246) Quit (Quit: Leaving)
[18:26] * tdasilva (~quassel@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:27] <gchristensen> krystal: I don't think so, "We recommend approximately 50-100 placement groups per OSD", "example, for a cluster with 200 OSDs and a pool size of 3 replicas ... estimate number of PGs as (200 * 100) / 3 = 6667 [rounded to nearest power of 2]: 8192
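That rule of thumb from the docs is easy to sanity-check; a tiny Python sketch of the arithmetic gchristensen quotes (not an official tool, just the formula):

    # (OSDs * PGs-per-OSD) / pool size, rounded up to the next power of two
    def pg_count(num_osds, pgs_per_osd=100, pool_size=3):
        raw = num_osds * pgs_per_osd / float(pool_size)
        power = 1
        while power < raw:
            power *= 2
        return power

    print(pg_count(200))  # 8192, matching the docs example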
[18:27] * sarob (~sarob@67.23.204.226) has joined #ceph
[18:28] * sarob (~sarob@67.23.204.226) Quit (Remote host closed the connection)
[18:28] <krystal> yep but it doesn't sound right, especially for a small setup
[18:28] <gchristensen> how do you mean?
[18:28] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.yahoo.com) has joined #ceph
[18:29] <gchristensen> what about adding PGs makes you wonder if it reduces redundancy?
[18:29] <krystal> if you have as many osds as PGs in a 3-replica setup, losing 3 drives simultaneously will always end up losing data on the PGs pointing to those drives
[18:30] <krystal> increasing the PG count decreases the quantity of data you risk losing but increases the chance you'll lose some
[18:31] <krystal> I don't know if I'm clear as my english is very perfectible :)
[18:31] <krystal> oh, sorry, I meant as many PGs as OSD combinations in my last sentence
[18:32] <gchristensen> I guess I don't think I'm familiar enough to give you a good answer
[18:32] <krystal> I think you can draw a parallel with the birthday attack algorithm
[18:48] * sarob_ (~sarob@67.23.204.226) has joined #ceph
[18:49] * Gamekiller77 (~Gamekille@128-107-239-234.cisco.com) Quit (Ping timeout: 480 seconds)
[18:51] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[18:52] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:52] * thomnico (~thomnico@2a01:e35:8b41:120:68b6:1735:1e6b:b495) has joined #ceph
[18:53] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[18:53] * sarob (~sarob@nat-dip6.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[18:54] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Operation timed out)
[18:55] * shang (~ShangWu@59.188.42.3) Quit (Quit: Ex-Chat)
[18:56] * sigsegv (~sigsegv@188.25.123.201) has joined #ceph
[18:58] * sarob_ (~sarob@67.23.204.226) Quit (Remote host closed the connection)
[18:58] <kanagaraj> ceph-deploy osd activate node1:/var/local/osd0 node2:/var/local/osd1 is hanging
[18:58] * sarob (~sarob@67.23.204.226) has joined #ceph
[18:59] <kanagaraj> and it says "[WARNIN] No data was received after 300 seconds, disconnecting..." , what could be the issue?
[18:59] * andreask (~andreask@178.210.125.106) has joined #ceph
[18:59] * ChanServ sets mode +v andreask
[19:00] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:00] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[19:01] * andreask (~andreask@178.210.125.106) has left #ceph
[19:03] <thiago> Is there a way to configure the Ceph cluster without using the ceph-deploy command?
[19:03] <thiago> In official documentation I only see examples with ceph-deploy
[19:04] * jordanP (~jordan@185.23.92.11) Quit (Quit: Leaving)
[19:04] <PerlStalker> AFAIK, it still can be done manually but that is highly discouraged these days.
[19:04] <alfredodeza> thiago: http://ceph.com/docs/master/install/
[19:05] <alfredodeza> to deploy it manually: http://ceph.com/docs/master/install/#deploy-a-cluster-manually
[19:05] <kanagaraj> after disabling the firewall, osd activate is not hanging, but throws "unable to create symlink /var/lib/ceph/osd/ceph-0 -> /var/local/osd0"
[19:05] <kanagaraj> am i missing something?
[19:06] * sarob (~sarob@67.23.204.226) Quit (Ping timeout: 480 seconds)
[19:06] <thiago> PerlStalker: I use Puppet and it is difficult to plan the cluster installation with Puppet and ceph-deploy
[19:07] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Read error: Operation timed out)
[19:07] <dmsimard> thiago: Have a look at both https://github.com/enovance/puppet-ceph and https://github.com/ceph/puppet-ceph
[19:08] * krystal (~krystal@2a00:1080:804:200:2677:3ff:fece:f200) Quit (Ping timeout: 480 seconds)
[19:08] <ircolle> dmsimard - you just beat me to it :-)
[19:08] <dmsimard> ircolle: muahahah
[19:09] <dmsimard> thiago: some people use Chef or Ansible, too.
[19:09] * lalatenduM (~lalatendu@121.244.87.117) Quit (Remote host closed the connection)
[19:15] <thiago> dmsimard: This will be very helpful for me. I'll test them. Thank's
[19:18] * tdasilva (~quassel@nat-pool-bos-u.redhat.com) has joined #ceph
[19:19] * rmoe (~quassel@12.164.168.117) has joined #ceph
[19:24] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[19:27] * qhartman (~qhartman@den.direwolfdigital.com) has joined #ceph
[19:28] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[19:29] * adamcrume (~quassel@2601:9:6680:47:4135:88aa:b17:eeb1) has joined #ceph
[19:33] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[19:38] * jtang_ (~jtang@80.111.83.231) Quit (Ping timeout: 480 seconds)
[19:38] <ganders> hi guys, someone had this type of error when trying to map a rbd on ubuntu or centos?
[19:39] <ganders> ERROR: modinfo: could not find module rbd
[19:39] <ganders> FATAL: Module rbd not found
[19:39] <ganders> rbd: modprobe rbd failed! (256)
[19:39] * thomnico (~thomnico@2a01:e35:8b41:120:68b6:1735:1e6b:b495) Quit (Ping timeout: 480 seconds)
[19:39] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[19:40] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[19:40] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) Quit ()
[19:40] * alram (~alram@38.122.20.226) has joined #ceph
[19:41] * Kioob (~Kioob@2a01:e34:ec0a:c0f0:21e:8cff:fe07:45b6) has joined #ceph
[19:42] * thomnico (~thomnico@2a01:e35:8b41:120:68b6:1735:1e6b:b495) has joined #ceph
[19:42] <Kioob> tnt_: so, with Xen 4.3 from Debian, I can use RBD device with qdisk yes, but I had to rebuild qemu to enable RBD support.
[19:42] <Kioob> thanks for the advice.
[19:49] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[19:49] * Pedras (~Adium@216.207.42.140) has joined #ceph
[19:50] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[19:51] * Gamekiller77 (~Gamekille@128-107-239-235.cisco.com) has joined #ceph
[19:53] <JayJ> dmsimard: Have you used either of the puppet modules? Any luck with them?
[20:00] * krystal (~krystal@2a01:e35:2e5c:bd0:2677:3ff:fece:f200) has joined #ceph
[20:00] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[20:01] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[20:02] * jtang_ (~jtang@80.111.83.231) has joined #ceph
[20:03] * yuriw1 (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[20:04] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) has joined #ceph
[20:05] <dmsimard> JayJ: I personally use a fork of Enovance's module here: https://github.com/TelekomCloud/puppet-ceph/tree/rc/eisbrecher
[20:06] <dmsimard> JayJ: I'm also a developer/core reviewer on https://github.com/ceph/puppet-ceph which uses Openstack's review/CI infrastructure and has strong integration testing in place
[20:06] <dmsimard> Folks from Enovance would eventually like to centralize the effort on the new puppet-ceph but I don't believe it supports every scenario just yet.
[20:07] * diegows (~diegows@190.190.5.238) has joined #ceph
[20:07] <dmsimard> loicd: ping ?
[20:09] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[20:09] <dmsimard> Perhaps loicd knows if there are people using the new puppet-ceph in production
[20:10] * zack_dol_ (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) has joined #ceph
[20:10] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) Quit (Read error: Connection reset by peer)
[20:10] <loicd> dmsimard: I don't know of any
[20:11] <dmsimard> loicd: David Gurtner maybe ?
[20:11] <loicd> not that I know
[20:12] <dmsimard> loicd: Will you be at the summit in November ? :)
[20:13] <dmsimard> I'll try to be there
[20:13] <loicd> I'm not sure if the company will agree to travel expenses. It's about 1.2 euros from my home to the palais des congres ;-)
[20:13] <dmsimard> lol
[20:16] * Gamekiller77 (~Gamekille@128-107-239-235.cisco.com) Quit (Quit: This computer has gone to sleep)
[20:21] <mongo> Any opinions on dedicated cluster networks vs lacp?
[20:21] <Vacum> mongo: both. dedicated cluster network with lacp :)
[20:22] <mongo> that is a lot of ports and additional cards.
[20:22] <mongo> it seems that ceph is a perfect match for lacp.
[20:22] <mongo> plus I doubt these systems can push more than 20Gb/s
[20:22] <Vacum> double 10Gb cards are fine
[20:24] <mongo> I would be tempted to do IB instead, it is cheaper. I'll try LACP.
[20:24] * Gamekiller77 (~Gamekille@128-107-239-234.cisco.com) has joined #ceph
[20:24] <Vacum> or you use quad 1GBe with lacp, could be cheaper, even including the switchports. and of course the cheaper cables :)
[20:25] <mongo> 10GbE is cheap and the systems I am using have two SFP+ ports, the replication lag on 1GbE is pretty huge.
[20:26] <mongo> 1M SFP+ cables cost me $18 a pop, that is not bad at all.
[20:27] <Vacum> you mean 10GSFP+Cu (direct attached twinax)?
[20:27] <mongo> yep, passive copper.
[20:27] <Pedras> mongo: where do you find these at $18
[20:27] <Vacum> passive copper? direct attached twinax is a pretty active cable? :)
[20:28] <mongo> Pedras, penguin computing, but I bought them with a switch and these awesome ASUS boxes
[20:28] <mongo> we have 10 3TB drives and two SSDs in 1U
[20:28] * thomnico (~thomnico@2a01:e35:8b41:120:68b6:1735:1e6b:b495) Quit (Ping timeout: 480 seconds)
[20:28] <ganders> mongo: what model of penguin computing server are you using?
[20:28] <Pedras> mongo: sorry are we talking of regular copper or twinax?
[20:29] <mongo> http://www.asus.com/Commercial_Servers_Workstations/RS300H8PS12/
[20:29] <ganders> 30TB raw in 1U and SSD that's pretty good
[20:29] <mongo> Pedras: twinax
[20:29] <Pedras> that is cheap, I think
[20:29] <mongo> ganders: they do not have them listed but they will sell them
[20:29] <Vacum> that is too cheap :)
[20:29] <mongo> ask for the IceBreaker 812.
[20:30] <mongo> Pedras: it is cheap, 1M is easy to do and is cheap if you aren't plugging into a SFP+ restricted switch like a cisco etc...
[20:31] <mongo> if you want to use SSDs for the journal they can only really do 10 3.5" drives as the box has 12 6Gb/s ports and 2 3Gb/s ports, unless you are fine with the diff in bus speed.
[20:31] <ganders> thx :)
[20:31] * Gamekiller77 (~Gamekille@128-107-239-234.cisco.com) Quit (Quit: This computer has gone to sleep)
[20:32] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[20:32] <Vacum> are those hdd trays hotswap per drive?
[20:33] <mongo> NO, 3 drives per sled
[20:33] <mongo> not a big deal for ceph though.
[20:33] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[20:33] * Gamekiller77 (~Gamekille@128-107-239-235.cisco.com) has joined #ceph
[20:33] <Vacum> sure, if you either set the other two to noout manually - or be fast enough in replacing the one drive :)
[20:33] <mongo> there are two 2.5" SSD sleds in the back that are individual.
[20:34] <KaZeR> I have a weird issue. A couple of osds crashed, but it seems ceph is unable to recover. ceph -w gives me the status summary, one status line, then hangs, where it usually displays a new status line every second
[20:34] <Kioob> mongo: I had just bought this http://www.asrockrack.com/general/productdetail.asp?Model=1U12LW-C2750 to test with ceph
[20:35] * Gamekiller77 (~Gamekille@128-107-239-235.cisco.com) Quit ()
[20:36] <mongo> Kioob: nice, probably cheaper. The built in 10GbE was the main reason I went with the other ones, these are for openstack so 10GbE is critical.
[20:37] * longguang (~chatzilla@123.126.33.253) Quit (Ping timeout: 480 seconds)
[20:37] <Kioob> Ok, I want to use it for «cold storage»
[20:37] <mongo> We have 8 nodes of 4Xsleds per 4U filled with SSD for another cluster.
[20:38] <mongo> which will be in front of these systems if cache tiering works for our workload.
[20:39] <Vacum> mongo: did you already do a testdrive with your spinner based osd nodes?
[20:39] <mongo> yes, I made a test cluster with 15 pe1950s first, the SSD nodes were acquired due to a fear of spinning disks among management.
[20:47] * kanagaraj (~kanagaraj@27.7.17.236) Quit (Quit: Leaving)
[20:48] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[20:49] * longguang (~chatzilla@123.126.33.253) has joined #ceph
[20:52] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[20:53] * rendar (~I@87.19.182.38) Quit (Ping timeout: 480 seconds)
[20:54] * Sysadmin88 (~IceChat77@94.4.20.0) has joined #ceph
[20:56] * rendar (~I@87.19.182.38) has joined #ceph
[21:03] * allsystemsarego (~allsystem@79.115.170.45) Quit (Quit: Leaving)
[21:04] * hedin_ (~hedin@81.25.179.168) has joined #ceph
[21:05] * hedin (~hedin@81.25.179.168) Quit (Read error: Operation timed out)
[21:10] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[21:11] <burley> mongo: What SSD drives are you using and have you run any benchmarking for iops on them in the cluster?
[21:12] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[21:13] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) has joined #ceph
[21:15] <thiago> Is it possible to configure a reliable ceph cluster of 2 nodes for data mirroring (just a raid1 over the network)?
[21:16] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[21:16] <Kioob> thiago: you should have 3 MON
[21:16] <Kioob> 3 monitor instances
[21:18] * qhartman (~qhartman@den.direwolfdigital.com) Quit (Quit: Ex-Chat)
[21:19] <thiago> Hmm .. So a minimum cluster for testing must have 3 servers. Right?
[21:19] <SpComb> thiago: it's not really possible to make a fully reliable cluster of anything with just two nodes
[21:19] <SpComb> no quorum
[21:20] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[21:21] <SpComb> thiago: you can run the third mon separately though
[21:21] <Kioob> thiago: for testing, one monitor instance is enough. For a reliable cluster, you need at least 3
[21:21] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[21:22] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[21:23] * rweeks (~rweeks@pat.hitachigst.com) has joined #ceph
[21:26] <thiago> Today I have a 2 node failover cluster with drbd + ext4. My goal is to have a solution where I do load balancing between the 2 nodes. Do you think this is a situation where Ceph can help me?
[21:27] * ircolle is now known as ircolle-afk
[21:27] <tnt_> For 2 nodes, I'd keep drbd.
[21:27] <Kioob> +1 tnt_
[21:28] <Kioob> And because of fragmentation, in a 2 node setup drbd is far faster than ceph. Ceph provides scalability.
[21:29] * hedin (~hedin@81.25.179.168) has joined #ceph
[21:31] * hedin_ (~hedin@81.25.179.168) Quit (Ping timeout: 480 seconds)
[21:35] * adamcrume_ (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) has joined #ceph
[21:36] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[21:37] * adamcrume (~quassel@2601:9:6680:47:4135:88aa:b17:eeb1) Quit (Ping timeout: 480 seconds)
[21:37] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[21:40] <thiago> I understand. Does it make sense to run a benchmark on a test cluster with 2 nodes?
[21:40] <thiago> Because I need to compare the performance with ext4 to justify the investment in the project.
[21:41] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[21:41] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[21:42] * hedin (~hedin@81.25.179.168) Quit (Ping timeout: 480 seconds)
[21:42] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) has joined #ceph
[21:45] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has left #ceph
[21:50] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Ping timeout: 480 seconds)
[21:53] * adamcrume_ (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[21:54] <cookednoodles> not really
[21:57] * dansni (~oftc-webi@206-248-161-220.dsl.teksavvy.com) has joined #ceph
[21:59] * mfa298 (~mfa298@gateway.yapd.net) Quit (Ping timeout: 480 seconds)
[22:01] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[22:02] <dansni> I'm using a line of code taken from ceph's docs: http://ceph.com/docs/master/rbd/librbdpy/
[22:02] <dansni> image = rbd.Image(ioctx, image_id)
[22:02] <dansni> but I get this error: https://gist.github.com/danielsnider/5fe86195ada00b36bb38
[22:02] <dansni> It's very weird, any ideas?
[22:05] <cookednoodles> sure your ioctx is right ?
[22:07] * mfa298 (~mfa298@gateway.yapd.net) has joined #ceph
[22:08] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[22:08] <dansni> Yeah I think so. I'm using "volumes" $ sudo ceph osd lspools
[22:08] <dansni> 0 data,1 metadata,2 rbd,3 images,4 volumes
[22:08] <dansni> ioctx = cluster.open_ioctx('volumes')
[22:11] <cookednoodles> with rados.Rados(conffile='/etc/ceph/ceph.conf') as cluster:
[22:11] <cookednoodles> volume_ioctx = cluster.open_ioctx('volumes')
[22:11] <cookednoodles> thats from 'my' code
[22:12] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[22:12] * Gamekiller77 (~Gamekille@128-107-239-235.cisco.com) has joined #ceph
[22:14] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[22:15] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[22:16] <SpComb> are you calling image.close() somewhere?
[22:16] <dansni> no, maybe I should?
[22:16] <SpComb> well, that's what's crashing
[22:17] <SpComb> be very careful with managing the open objects
[22:17] <SpComb> the ceph rbd python library seems to be a little buggy with the refcount cleanup and likes to crash
[22:17] <dansni> should I close the ioctx too?
[22:18] <SpComb> dansni: the assert could potentially also be from some arbitrary exception that gets raised, and triggers that assert while unwinding the stack, thus losing the exception
[22:18] * leseb (~leseb@81-64-215-19.rev.numericable.fr) Quit (Ping timeout: 480 seconds)
[22:26] * tdasilva (~quassel@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[22:26] * rotbeard (~redbeard@aftr-37-24-151-2.unity-media.net) has joined #ceph
[22:27] * leseb (~leseb@81-64-215-19.rev.numericable.fr) has joined #ceph
[22:28] <dansni> @SpComb Thank you!! I wasn't doing image.close()!
[22:28] <dansni> And there it was in the documentation too. My bad!
[22:28] <dansni> Thanks :)
[22:28] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Read error: Operation timed out)
[22:28] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[22:29] <SpComb> a valid python library *should* handle it, but the rbd one fails at it
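For reference, the cleanup pattern described in the librbdpy docs dansni linked — image, ioctx and cluster closed explicitly, in that order — looks roughly like this. The pool and image names are placeholders; newer versions of the bindings also accept these objects as context managers, as in cookednoodles' snippet above.

    import rados
    import rbd

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx('volumes')
        try:
            image = rbd.Image(ioctx, 'myimage')
            try:
                print(image.size())
            finally:
                image.close()   # skipping this is what trips the assert on shutdown
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()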
[22:30] * garphy`aw is now known as garphy
[22:31] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[22:34] * dansni (~oftc-webi@206-248-161-220.dsl.teksavvy.com) Quit (Quit: Page closed)
[22:39] * qhartman (~qhartman@den.direwolfdigital.com) has joined #ceph
[22:39] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:43] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[22:43] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[22:46] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving)
[22:47] * madkiss (~madkiss@chello084112124211.20.11.vie.surfer.at) has joined #ceph
[22:47] <thiago> Thank you all!
[22:47] * thiago (~thiago@201.76.188.234) Quit (Quit: Saindo)
[22:49] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[22:53] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[22:56] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[22:56] * ChanServ sets mode +v andreask
[22:57] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[22:58] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[22:58] * fireD (~fireD@93-139-204-185.adsl.net.t-com.hr) Quit (Quit: leaving)
[23:03] * ircolle-afk is now known as ircolle
[23:06] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[23:06] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[23:08] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) has joined #ceph
[23:10] * kevinc (~kevinc__@client65-44.sdsc.edu) has joined #ceph
[23:13] * ikrstic (~ikrstic@178-223-50-75.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[23:13] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[23:15] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[23:15] * andreask (~andreask@zid-vpnn123.uibk.ac.at) has joined #ceph
[23:15] * ChanServ sets mode +v andreask
[23:16] * andreask (~andreask@zid-vpnn123.uibk.ac.at) has left #ceph
[23:17] * TiCPU (~jeromepou@190-130.cgocable.ca) has joined #ceph
[23:19] * markbby1 (~Adium@168.94.245.4) Quit (Quit: Leaving.)
[23:20] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Ping timeout: 480 seconds)
[23:21] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[23:26] * bkopilov (~bkopilov@213.57.17.91) Quit (Read error: Operation timed out)
[23:27] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[23:30] * sjm (~sjm@143.115.158.238) has left #ceph
[23:30] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[23:39] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (Ping timeout: 480 seconds)
[23:40] * kevinc (~kevinc__@client65-44.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[23:45] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[23:56] * Gamekiller77 (~Gamekille@128-107-239-235.cisco.com) Quit (Read error: Operation timed out)
[23:59] * haomaiwang (~haomaiwan@106.38.204.62) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.