#ceph IRC Log

IRC Log for 2014-08-03

Timestamps are in GMT/BST.

[0:06] * Jakey (uid1475@id-1475.uxbridge.irccloud.com) Quit (Quit: Connection closed for inactivity)
[0:13] * sputnik13 (~sputnik13@99.166.16.162) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:17] <rektide> are development packages available?
[0:18] <rektide> does anyone know how i can pin ceph-extra in apt to be favored?
[0:24] * rendar (~I@87.19.176.94) Quit ()
[0:26] <rektide> ooooh, sorry, dev packages: http://ceph.com/docs/master/install/get-packages/#add-ceph-development
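For the apt pinning question above, a minimal sketch of an apt preferences entry that makes packages from the ceph.com repositories win over the distribution archive; the file name and origin string are assumptions, so check what apt-cache policy reports for your mirror:

    # /etc/apt/preferences.d/ceph.pref (hypothetical file name)
    # anything served from ceph.com outranks the distro archive's default priority of 500
    Package: *
    Pin: origin ceph.com
    Pin-Priority: 1001

Afterwards, apt-get update followed by apt-cache policy ceph should show the ceph.com candidate as the preferred one.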
[0:30] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:51] * steki (~steki@212.200.65.129) Quit (Ping timeout: 480 seconds)
[1:06] * jharley (~jharley@69-196-143-186.dsl.teksavvy.com) has joined #ceph
[1:15] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) Quit (Ping timeout: 480 seconds)
[1:16] * oms101 (~oms101@p20030057EA3A8100EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:24] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[1:24] * oms101 (~oms101@p20030057EA5C9F00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:27] * WhIteSidE (~chatzilla@wsip-70-184-76-157.tc.ph.cox.net) has joined #ceph
[1:27] <WhIteSidE> Hello all
[1:28] <WhIteSidE> I'm trying to install radosgw on CentOS 6.5
[1:28] <WhIteSidE> I cannot find the radosgw init script
[1:28] <WhIteSidE> Is there still a separate init script? Or is FastCGI supposed to start it automatically?
[1:29] <danieljh> WhIteSidE: check for ceph-radosgw
[1:30] <WhIteSidE> Yeah, that's the package I have installed
[1:30] <danieljh> I'm not sure (don't have my work's environment here) but I think it's called ceph-radosgw.
[1:30] <danieljh> in /etc/init.d/
[1:30] <danieljh> instead of just radosgw (what the tutorial says)
[1:30] <WhIteSidE> # ls /etc/init.d/c*
[1:30] <WhIteSidE> /etc/init.d/ceph /etc/init.d/collectd /etc/init.d/cpuspeed /etc/init.d/crond /etc/init.d/cups
[1:31] <WhIteSidE> Where do the init scripts live in the repo? I'll try installing one manually
[1:34] * fdmanana (~fdmanana@bl5-245-222.dsl.telepac.pt) Quit (Quit: Leaving)
[1:34] <danieljh> WhIteSidE: on CentOS 6.5 I found it at: /etc/init.d/ceph-radosgw -- I did this just two days ago
[1:34] <danieljh> could you check if it really is installed?
[1:37] <WhIteSidE> yep
[1:37] <WhIteSidE> http://pastebin.com/FCHatJ6G
[1:38] <WhIteSidE> For instance, "radosgw-admin --cluster mia1 --conf /etc/ceph/mia1.conf user info --uid=UID" is working
[1:40] <danieljh> Hmmm interesting: yours is from epel, mine from the ceph repo instead... unfortunately I'm no expert in ceph's packaging system, but this might be of interest to you.
[1:43] <WhIteSidE> Hmm
[1:43] <WhIteSidE> Thanks
[1:44] <WhIteSidE> I just pulled a copy of the init script from git and it seems to be working. I'll find the packager and let them know
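A sketch of how the missing init script above can be checked and, once the script exists, wired up on CentOS 6; the package and service names follow the ceph-radosgw naming used in the conversation:

    # does the installed package ship an init script at all?
    rpm -ql ceph-radosgw | grep -i init
    rpm -qf /etc/init.d/ceph-radosgw

    # once /etc/init.d/ceph-radosgw is in place (from the package, or copied from
    # the ceph git tree as done above), register and start it the SysV way
    chkconfig ceph-radosgw on
    service ceph-radosgw start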
[2:08] * zerick (~Erick@190.118.43.55) has joined #ceph
[2:10] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[2:30] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) has joined #ceph
[2:35] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[2:37] * sjm (~sjm@108.53.250.33) has joined #ceph
[2:37] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[2:38] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Remote host closed the connection)
[2:38] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) has joined #ceph
[2:40] * jhujhiti (~jhujhiti@00012a8b.user.oftc.net) has joined #ceph
[2:40] <jhujhiti> hey guys, i've got a bit of a problem. i just moved a mon cluster to a new datacenter *prior* to reading the docs (i know...) so i need to re-ip the mons using the "messy" way here: http://ceph.com/docs/master/rados/operations/add-or-rm-mons/#changing-a-monitor-s-ip-address
[2:41] <jhujhiti> since i've already moved them, i can't "ceph mon getmap". is there another way to get this map?
[2:45] * aknapp (~aknapp@ip68-99-237-112.ph.ph.cox.net) Quit (Read error: Operation timed out)
[2:48] <jhujhiti> ah, /var/lib/ceph/mon/*/monmap/...
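For reference, a sketch of the "messy" re-IP procedure from the linked page, using the monmap kept in the monitor's data directory; the monitor id "a" and the new address are placeholders:

    # with the monitor stopped, pull the current map out of its store
    ceph-mon -i a --extract-monmap /tmp/monmap
    monmaptool --print /tmp/monmap

    # replace the old address with the new one, once per monitor
    monmaptool --rm a /tmp/monmap
    monmaptool --add a 10.0.1.10:6789 /tmp/monmap

    # inject the edited map back into each monitor before starting it again
    ceph-mon -i a --inject-monmap /tmp/monmap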
[2:52] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[2:54] * ismell (~ismell@host-24-52-35-110.beyondbb.com) Quit (Read error: Operation timed out)
[3:06] * d3fault (~user@ip70-171-243-167.tc.ph.cox.net) Quit (Quit: Leaving)
[3:26] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[3:27] * WhIteSidE (~chatzilla@wsip-70-184-76-157.tc.ph.cox.net) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 30.0/20140605174243])
[3:27] * Nacer (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) has joined #ceph
[3:29] * LeaChim (~LeaChim@host86-161-89-237.range86-161.btcentralplus.com) Quit (Read error: Operation timed out)
[3:37] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[3:43] * Nacer (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) Quit (Remote host closed the connection)
[3:43] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) Quit (Remote host closed the connection)
[3:47] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:50] * joerocklin (~joe@cpe-65-185-149-56.woh.res.rr.com) has joined #ceph
[3:52] * darkling (~hrm@246.244.187.81.in-addr.arpa) has joined #ceph
[4:15] * zerick (~Erick@190.118.43.55) Quit (Remote host closed the connection)
[4:16] * zerick (~Erick@190.118.43.55) has joined #ceph
[4:18] * hasues (~hazuez@12.216.44.38) has joined #ceph
[4:23] * oblu (~o@62.109.134.112) has joined #ceph
[4:26] * oblu (~o@62.109.134.112) Quit ()
[4:27] * jharley (~jharley@69-196-143-186.dsl.teksavvy.com) Quit (Quit: jharley)
[4:33] * oblu (~o@62.109.134.112) has joined #ceph
[5:06] <chowmeined> So all the warnings I'm seeing about putting too many OSD journals on a single SSD, that's mainly about performance, correct? For example, if I had 12 spinners and wanted to put all 12 journals on a PCI Express SSD, would that be an issue?
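For context, a sketch of what the shared-journal layout being asked about might look like with ceph-disk; the device names are hypothetical, and each prepare call carves another journal partition out of the SSD:

    # data device first, journal device second; repeat for each of the 12 spinners
    ceph-disk prepare /dev/sdb /dev/nvme0n1
    ceph-disk prepare /dev/sdc /dev/nvme0n1

    # journal partition size is taken from ceph.conf, e.g.
    #   [osd]
    #   osd journal size = 10240   ; MB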
[5:28] * Vacum_ (~vovo@i59F79AAC.versanet.de) has joined #ceph
[5:35] * Vacum (~vovo@i59F7947F.versanet.de) Quit (Ping timeout: 480 seconds)
[5:45] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[5:46] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[5:49] * kissss (~kisss@90.174.5.254) has joined #ceph
[5:50] * vbellur (~vijay@122.172.243.14) Quit (Ping timeout: 480 seconds)
[5:52] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Remote host closed the connection)
[5:59] * vbellur (~vijay@122.167.220.189) has joined #ceph
[6:06] * hflai (~hflai@alumni.cs.nctu.edu.tw) Quit (Remote host closed the connection)
[6:09] * hasues (~hazuez@12.216.44.38) Quit (Quit: Leaving.)
[6:47] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[6:54] * oblu (~o@62.109.134.112) has joined #ceph
[8:08] * darkling (~hrm@246.244.187.81.in-addr.arpa) Quit (Ping timeout: 480 seconds)
[8:29] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[8:40] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:50] * ikrstic (~ikrstic@109-93-162-27.dynamic.isp.telekom.rs) has joined #ceph
[9:04] <longguang> hi
[9:10] <chowmeined> hi
[9:11] * sjm (~sjm@108.53.250.33) has left #ceph
[9:13] * finster (~finster@cmdline.guru) has joined #ceph
[9:28] <Vacum_> chowmeined: regarding your question 4 hours ago :) it would also mean this SSD is a single point of failure for 12 OSDs.
[9:28] <chowmeined> Vacum_, okay, maybe I could see if two cards per node fits
[9:28] <chowmeined> fits my budget :)
[9:29] <chowmeined> Do PCI-Express SSDs really fail much? I was looking at the Intel P3700/P3600
[9:29] <Vacum_> chowmeined: and all that is written to all osds goes through that ssd. I have no idea how many "TB written" is the current standard for pci-e SSDs.
[9:30] <chowmeined> Ah, I'm pretty sure it'll cover us there, we're averaging about 2-3TB/day in writes
[9:30] <chowmeined> the P3700 can handle up to 10PBw
[9:30] <Vacum_> not too bad :)
[9:31] <chowmeined> So the warning is really about not treating the SSDs like magic? Rather than some sort of bottleneck that can happen if many journals are on a single device
[9:31] <chowmeined> bottleneck in Ceph*
[9:31] <chowmeined> or the kernel perhaps
[9:32] <Vacum_> chowmeined: I have to admit, we do not use SSDs for the journals. we put them on the spinners. with enough spinners the network is still the bottleneck that way :)
[9:33] <Vacum_> so I can't comment on single SSDs for all journals being a bottleneck or not
[9:33] <chowmeined> Ah. We're currently evaluating InfiniBand to help with the network. Our application is very latency sensitive so I'd like to offload to SSD where possible
[9:35] <Vacum_> chowmeined: ceph uses the journal partitions "raw", there is no filesystem between. Did you investigate what this means regarding TRIM?
[9:36] <chowmeined> Ouch. That's a good point, hadn't thought of that
[9:38] <Vacum_> and you will probably need to set the i/o scheduler for those partitions to noop.
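Setting the scheduler mentioned here is a one-liner per device; a sketch with a placeholder device name:

    # check what the journal device is currently using
    cat /sys/block/sdb/queue/scheduler

    # switch to noop at runtime
    echo noop > /sys/block/sdb/queue/scheduler

    # persist it via a udev rule or the elevator=noop kernel parameter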
[9:39] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) has joined #ceph
[9:41] <chowmeined> "Performance specifications apply to both compressible and incompressible data". Interesting, it seems Intel's controller may not depend on as many tricks for its performance as some.
[9:42] <Vacum_> chowmeined: with 12 journals on the same drive you can expect many small fsyncs. every write to an osd is returned only after it has been sync'ed to at least min_size journals iirc
[9:54] * steki (~steki@212.200.65.137) has joined #ceph
[9:55] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) Quit (Quit: Leaving.)
[10:07] <chowmeined> Vacum_, interesting discussion on trim: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-December/006461.html
[10:07] <chowmeined> given the small size of journals, and we'd be using 400GB cards, we could trim it upfront and then underprovision it by 300GB
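A sketch of the trim-upfront-and-underprovision idea, with a hypothetical device name; blkdiscard ships with util-linux:

    # discard the entire device once, before any journal partitions exist
    blkdiscard /dev/nvme0n1

    # then only allocate a small slice of it and leave the rest untouched
    parted /dev/nvme0n1 mklabel gpt
    parted /dev/nvme0n1 mkpart journal-sdb 1MiB 11GiB
    # ... more journal partitions, stopping well short of the 400GB capacity ...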
[10:10] <Vacum_> chowmeined: sounds OK then
[10:11] <Vacum_> chowmeined: the 400GB model does "only" 1GB/s sequential write (without reads in between!). that is less than a single 10GbE port
[10:12] <chowmeined> yes, I think we'll end up using 2 cards
[10:12] <Vacum_> with 12 journals you will have 12 linear streams, but at 12 different positions on the drive. so not really "random", but still scattered.
[10:12] <chowmeined> but each node only has 12 spinners anyways
[10:13] <Vacum_> chowmeined: it pretty much depends on your usecase then :)
[10:14] <chowmeined> Don't we need extra network bw for the OSD to use for cluster communication/replication?
[10:15] <Vacum_> chowmeined: not necessarily. it can also run on the same network as the client traffic. separating both has benefits though
[10:16] <chowmeined> If we do go with InfiniBand, splitting them will not be a problem
[10:16] <Vacum_> chowmeined: your setup definitely sounds interesting :) and lot of $/TB :)
[10:17] <chowmeined> Do you know of a list of drives Ceph users have had the best luck with? Like, 2-4TB SATA drives
[10:18] <Vacum_> chowmeined: no list, sorry. we are using 4TB SAS drives of different vendors. working out fine until now.
[10:19] <chowmeined> Yeah, a lot of $/TB... Is there anything inherent to Ceph that would prevent it from being low latency? Say, average <5ms?
[10:19] <Vacum_> chowmeined: and of course we saw a ~2%-3% early death rate of those drives
[10:20] <chowmeined> our use case is high bw writes, low latency reads
[10:20] <chowmeined> random writes*
[10:20] <Vacum_> chowmeined: 5ms is very ambitious. your client has to issue the write request. it goes to the primary PG's OSD. which starts writing to its journal. and sends the request to all other replicas in parallel
[10:21] <Vacum_> once at least min_size-1 replicas have finished their journal write, plus the primary PG, your request will return to the client as successful
[10:21] <chowmeined> Vacum_, If a replica OSD is unresponsive, does it hang the iop? Is there a timeout? Can it send it to 5 replicas and wait for 3?
[10:21] <chowmeined> ah, okay so min_size would help there
[10:22] <Vacum_> chowmeined: yes. apparently yes, but we didn't manage to get a client to time out. yes
[10:22] * Nacer (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) has joined #ceph
[10:23] <Vacum_> chowmeined: there are a bunch of PG states in which all traffic will hang, e.g. peering. or if a PG does not have min_size replicas available
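Both knobs discussed here are per-pool settings; a quick sketch using an example pool name:

    # size is the replica count, min_size is how many replicas must be available
    # (and have acknowledged the journal write) for I/O to proceed
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size
    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2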
[10:23] <chowmeined> hm, interesting. Is Ceph perhaps not the right solution for this use case?
[10:24] <Vacum_> chowmeined: I can't tell, I don't know your usecase. except the <5ms requirement
[10:25] <chowmeined> high sustained random write i/o, low latency random read i/o (ideally <5ms). 80% write/20% read. The high availability, scaling and flexibility features of Ceph would be very useful.
[10:25] <Vacum_> chowmeined: our main concern is throughput and durability, not very small latency. so we did not put too much time into the latency topic
[10:26] <chowmeined> Are you a Ceph developer? Or are you speaking in terms of your deployment
[10:26] <Vacum_> ah, deployment!
[10:26] <Vacum_> sorry :)
[10:26] <chowmeined> wasn't sure if you meant Ceph wasn't designed with latency in mind or not
[10:27] <chowmeined> np :)
[10:27] <Vacum_> no, that was not what I wanted to express. :)
[10:29] <Vacum_> having said that, we of course do measure rados latency on our clients. and one thing has become obvious: during normal operation ("HEALTH_OK"), latency is pretty stable. if something happens (an OSD dies, or you add new hosts, etc.), a part of the client requests will be blocked for some time
[10:31] <Vacum_> chowmeined: and of course due to statistics, it can happen that a bunch of client requests end up on the same OSD at the same time.
[10:32] <Vacum_> chowmeined: btw, you use spinners. the reads will not go to the journal osds, they will go to the spinners. not sure how you can guarantee <5ms there, regardless of the used software
[10:34] <Vacum_> chowmeined: spinners are ugly. we had 2 spinners (out of thousands) showing no issues with smart, no read errors etc., but spiking read latencies of up to 1 second. under normal load. all other spinners in the same nodes were working fine. so a single spinner can result in 1/n-th of all requests becoming slow, with n being the number of OSDs that host primary PG replicas
[10:37] <chowmeined> Is there a way to have Ceph fan-out reads and use whichever response comes first?
[10:37] * kissss (~kisss@90.174.5.254) Quit (autokilled: Please do not spam on IRC. Email support@oftc.net with questions. (2014-08-03 08:37:39))
[10:39] <Vacum_> chowmeined: this was discussed at the last Ceph Developer Summit as a potential new feature. iirc, since Firefly rados does allow reading from the "nearest" (in the crush hierarchy) OSD instead of the primary
[10:39] <chowmeined> I see, interesting
[10:40] <chowmeined> but you're very right. Our current approach is using plenty of RAM and caching reads
[10:40] <Vacum_> chowmeined: not sure about the implications though :)
[10:40] <chowmeined> so we can continue with that
[10:41] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[10:43] <chowmeined> I have a 3 node test setup going. The read latency has been pretty good so far.
[10:44] <Vacum_> chowmeined: did you try to randomly read more than fits into the filesystem cache of the OSD hosts? :)
[10:45] <chowmeined> I believe so, but I'll make it 80GB and see how it goes.
[10:46] <Vacum_> chowmeined: so you are running with a replica size of 2 ?
[10:46] <chowmeined> currently it is 3
[10:46] <Vacum_> with 3 nodes? are you using firefly with crush tunables set to optimal?
[10:47] <Vacum_> time for breakfast here, bbl :)
[10:47] <chowmeined> 3 nodes. I haven't changed the tunables. 26 spinners, not using SSD journals. 4400 random 8k read iops, 14ms avg latency (mediocre)
[10:47] <chowmeined> 80GB test file
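The tunables question can be answered without changing anything; note that actually switching an existing cluster to the optimal profile triggers data movement:

    ceph osd crush show-tunables
    # only when the resulting rebalance is acceptable:
    ceph osd crush tunables optimal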
[11:12] <Vacum_> 170 i/os per spinner is pretty good
[11:13] * jtaguinerd1 (~Adium@112.205.0.122) has joined #ceph
[11:14] * Nacer (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) Quit (Remote host closed the connection)
[11:14] <chowmeined> If I throttle the iops for this test down to 2000 then average latency falls to 7ms which would be fine
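Roughly the kind of fio job those numbers correspond to, with illustrative paths and values; the last option throttles the run, drop it for an uncapped test:

    fio --name=randread --filename=/mnt/test/fio.dat --size=80G \
        --ioengine=libaio --direct=1 --rw=randread --bs=8k \
        --iodepth=32 --runtime=300 --time_based \
        --rate_iops=2000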
[11:16] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[11:16] <Vacum_> so the question is if more hosts will get you more iops. likely yes
[11:17] <chowmeined> Initially I tested with a smaller config, the iops capacity of the system appears to be scaling roughly linearly
[11:18] <chowmeined> this is pretty amazing software, I have to say
[11:18] <Vacum_> definitely!
[11:27] * Nacer (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) has joined #ceph
[11:29] * dis (~dis@109.110.67.234) Quit (Ping timeout: 480 seconds)
[11:29] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[11:45] * LeaChim (~LeaChim@host86-161-89-237.range86-161.btcentralplus.com) has joined #ceph
[11:45] * jtaguinerd (~Adium@112.205.0.122) has joined #ceph
[11:52] * jtaguinerd1 (~Adium@112.205.0.122) Quit (Ping timeout: 480 seconds)
[11:58] * Sysadmin88 (~IceChat77@2.218.9.98) has joined #ceph
[12:16] * jtaguinerd (~Adium@112.205.0.122) Quit (Quit: Leaving.)
[12:18] * steki (~steki@212.200.65.137) Quit (Ping timeout: 480 seconds)
[12:18] * Nacer (~Nacer@lap34-h03-213-44-220-9.dsl.sta.abo.bbox.fr) Quit (Remote host closed the connection)
[12:21] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[12:32] * Eco (~eco@adsl-99-105-55-80.dsl.pltn13.sbcglobal.net) Quit (Read error: Operation timed out)
[12:36] * rendar (~I@host17-179-dynamic.56-79-r.retail.telecomitalia.it) has joined #ceph
[12:52] * fdmanana (~fdmanana@bl5-245-222.dsl.telepac.pt) has joined #ceph
[13:36] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:42] * joao|lap (~JL@78.29.191.247) has joined #ceph
[13:42] * ChanServ sets mode +o joao|lap
[13:47] * fdmanana (~fdmanana@bl5-245-222.dsl.telepac.pt) Quit (Quit: Leaving)
[13:47] * bkopilov (~bkopilov@213.57.16.172) Quit (Read error: Connection reset by peer)
[13:49] * pressureman_ (~daniel@g225161119.adsl.alicedsl.de) has joined #ceph
[13:50] * Sysadmin88 (~IceChat77@2.218.9.98) Quit (Read error: Connection reset by peer)
[14:01] * bkopilov (~bkopilov@213.57.16.16) has joined #ceph
[14:03] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[14:08] * sbadia (~sbadia@yasaw.net) Quit (Quit: Bye)
[14:28] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[14:34] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[14:37] * fejjerai (~quassel@corkblock.jefferai.org) Quit (Ping timeout: 480 seconds)
[14:45] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) Quit (Ping timeout: 480 seconds)
[14:48] * bkopilov (~bkopilov@213.57.16.16) Quit (Ping timeout: 480 seconds)
[14:49] * zack_dolby (~textual@p8505b4.tokynt01.ap.so-net.ne.jp) has joined #ceph
[14:49] * b0e (~aledermue@p200300784F6285A1FEF8AEFFFE2A3D98.dip0.t-ipconnect.de) has joined #ceph
[14:55] * ismell (~ismell@host-24-52-35-110.beyondbb.com) has joined #ceph
[14:57] * pressureman_ (~daniel@g225161119.adsl.alicedsl.de) Quit (Quit: Ex-Chat)
[15:00] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) has joined #ceph
[15:01] * ahmett (~horasan@85.97.133.149) has joined #ceph
[15:01] * ahmett (~horasan@85.97.133.149) Quit (autokilled: Do not spam other people. Mail support@oftc.net if you feel this is in error. (2014-08-03 13:01:56))
[15:13] * baylight (~tbayly@204.15.85.169) has joined #ceph
[15:14] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[15:16] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[15:23] * b0e (~aledermue@p200300784F6285A1FEF8AEFFFE2A3D98.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[15:31] * bkopilov (~bkopilov@213.57.16.16) has joined #ceph
[15:32] * allsystemsarego (~allsystem@79.115.170.35) has joined #ceph
[15:55] * andreask (~andreask@91.224.48.154) has joined #ceph
[15:55] * ChanServ sets mode +v andreask
[15:56] * andreask (~andreask@91.224.48.154) has left #ceph
[15:57] * bandrus (~Adium@216.57.72.205) has joined #ceph
[15:58] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[16:09] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Ping timeout: 480 seconds)
[16:10] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[16:23] * Sysadmin88 (~IceChat77@2.218.9.98) has joined #ceph
[16:24] * bandrus (~Adium@216.57.72.205) Quit (Quit: Leaving.)
[16:29] * cfreak200 (~cfreak200@p4FF3EAC7.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[16:33] * cury (~cury@114.248.224.218) Quit (Quit: Leaving)
[16:45] * fdmanana (~fdmanana@bl5-245-222.dsl.telepac.pt) has joined #ceph
[17:09] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[17:11] * jhujhiti (~jhujhiti@00012a8b.user.oftc.net) has left #ceph
[17:36] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[17:40] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) has joined #ceph
[17:50] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:52] * lupu (~lupu@86.107.101.214) has joined #ceph
[18:10] * ikrstic (~ikrstic@109-93-162-27.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[18:19] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[18:20] * hasues (~hazuez@12.216.44.38) has joined #ceph
[18:39] * soneedu (~oftc-webi@203.189.156.109) has joined #ceph
[18:40] * soneedu (~oftc-webi@203.189.156.109) Quit ()
[18:40] * soneedu (~oftc-webi@116.212.137.13) has joined #ceph
[18:42] * soneedu (~oftc-webi@116.212.137.13) has left #ceph
[18:42] * soneedu_ (~oftc-webi@203.189.156.109) has joined #ceph
[18:53] * soneedu_ (~oftc-webi@203.189.156.109) Quit (Quit: Page closed)
[18:59] * joao|lap (~JL@78.29.191.247) Quit (Ping timeout: 480 seconds)
[19:01] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Remote host closed the connection)
[19:09] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[19:11] * Nacer_ (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[19:11] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[19:13] * MaZ- (~maz@00016955.user.oftc.net) Quit (Quit: WeeChat 0.4.2-dev)
[19:22] * MaZ- (~maz@00016955.user.oftc.net) has joined #ceph
[19:25] * burley (~khemicals@185.sub-70-208-199.myvzw.com) has joined #ceph
[19:34] * andreask (~andreask@91.224.48.154) has joined #ceph
[19:34] * ChanServ sets mode +v andreask
[19:35] * andreask (~andreask@91.224.48.154) has left #ceph
[19:41] * burley (~khemicals@185.sub-70-208-199.myvzw.com) Quit (Ping timeout: 480 seconds)
[19:49] * burley (~khemicals@141.sub-70-208-194.myvzw.com) has joined #ceph
[19:59] * nhm (~nhm@184-97-150-107.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[20:06] * madkiss (~madkiss@2001:6f8:12c3:f00f:f029:9a21:b11e:453) has joined #ceph
[20:10] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:d75:c896:bee8:da30) has joined #ceph
[20:14] * madkiss (~madkiss@2001:6f8:12c3:f00f:f029:9a21:b11e:453) Quit (Ping timeout: 480 seconds)
[20:18] * rendar (~I@host17-179-dynamic.56-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:19] * burley (~khemicals@141.sub-70-208-194.myvzw.com) Quit (Quit: burley)
[20:21] * rendar (~I@host17-179-dynamic.56-79-r.retail.telecomitalia.it) has joined #ceph
[20:25] * KaZeR (~kazer@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[20:28] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[20:36] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[20:51] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[21:03] * fdmanana (~fdmanana@bl5-245-222.dsl.telepac.pt) Quit (Quit: Leaving)
[21:25] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[21:25] * BManojlovic (~steki@178-221-116-161.dynamic.isp.telekom.rs) has joined #ceph
[21:26] * baylight (~tbayly@204.15.85.169) Quit (Read error: Operation timed out)
[21:31] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[21:34] * DV (~veillard@veillard.com) has joined #ceph
[21:35] * steki (~steki@178-221-116-161.dynamic.isp.telekom.rs) has joined #ceph
[21:35] * BManojlovic (~steki@178-221-116-161.dynamic.isp.telekom.rs) Quit (Read error: Connection reset by peer)
[21:38] * hasues (~hazuez@12.216.44.38) Quit (Quit: Leaving.)
[21:39] * KaZeR (~kazer@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[21:41] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[21:55] * jharley (~jharley@173.230.163.47) has joined #ceph
[22:04] * jharley (~jharley@173.230.163.47) Quit (Quit: jharley)
[22:07] * BManojlovic (~steki@178-221-116-161.dynamic.isp.telekom.rs) has joined #ceph
[22:08] * steki (~steki@178-221-116-161.dynamic.isp.telekom.rs) Quit (Read error: Connection reset by peer)
[22:17] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[22:27] * Nacer_ (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Remote host closed the connection)
[22:27] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[22:36] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[22:39] * allsystemsarego (~allsystem@79.115.170.35) Quit (Quit: Leaving)
[23:00] * Nacer (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[23:01] * Nacer_ (~Nacer@pai34-4-82-240-124-12.fbx.proxad.net) has joined #ceph
[23:01] * dgbaley27 (~matt@c-98-245-167-2.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[23:25] * partner (joonas@ajaton.net) Quit (Quit: Lost terminal)
[23:27] * sputnik13 (~sputnik13@cpe-66-75-235-71.san.res.rr.com) has joined #ceph
[23:30] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) has joined #ceph
[23:33] * baylight (~tbayly@204.15.85.169) has joined #ceph
[23:39] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[23:40] * sputnik13 (~sputnik13@cpe-66-75-235-71.san.res.rr.com) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:47] * JC1 (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) has joined #ceph
[23:49] * baylight (~tbayly@204.15.85.169) has left #ceph
[23:51] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) has joined #ceph
[23:52] * JC (~JC@AMontpellier-651-1-420-97.w92-133.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.