#ceph IRC Log


IRC Log for 2012-12-25

Timestamps are in GMT/BST.

[0:01] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[0:19] * loicd (~loic@magenta.dachary.org) has joined #ceph
[0:59] * mgalkiewicz (~mgalkiewi@89-74-85-171.dynamic.chello.pl) has joined #ceph
[1:27] * roald (~Roald@87.209.150.214) Quit (Quit: Leaving)
[1:40] * ScOut3R (~ScOut3R@1F2EA078.dsl.pool.telekom.hu) has joined #ceph
[1:43] * ScOut3R (~ScOut3R@1F2EA078.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[1:48] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[1:58] * joao-mobile (~androirc@a95-93-146-10.cpe.netcabo.pt) has joined #ceph
[2:06] * todin (tuxadero@kudu.in-berlin.de) Quit (Read error: Operation timed out)
[2:07] * joao-mobile (~androirc@a95-93-146-10.cpe.netcabo.pt) Quit (Remote host closed the connection)
[2:08] * joao-mobile (~androirc@a95-93-146-10.cpe.netcabo.pt) has joined #ceph
[2:09] * joao-mobile (~androirc@a95-93-146-10.cpe.netcabo.pt) Quit ()
[2:09] * f4m8 (f4m8@kudu.in-berlin.de) Quit (Ping timeout: 480 seconds)
[2:15] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[2:25] * jmlowe1 (~Adium@c-71-201-31-207.hsd1.in.comcast.net) Quit (Quit: Leaving.)
[3:13] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[3:29] * gohko (~gohko@natter.interq.or.jp) has joined #ceph
[3:29] * gohko_ (~gohko@natter.interq.or.jp) Quit (Read error: Connection reset by peer)
[3:30] * gohko_ (~gohko@natter.interq.or.jp) has joined #ceph
[3:30] * gohko (~gohko@natter.interq.or.jp) Quit (Read error: Connection reset by peer)
[4:21] * mgalkiewicz (~mgalkiewi@89-74-85-171.dynamic.chello.pl) Quit (Ping timeout: 480 seconds)
[5:17] * lx0 is now known as lxo
[5:37] * scuttlemonkey (~scuttlemo@96-42-146-5.dhcp.trcy.mi.charter.com) Quit (Quit: This computer has gone to sleep)
[5:44] * noob2 (~noob2@ext.cscinfo.com) Quit (Quit: Leaving.)
[5:45] * stp__ (~stp@dslb-084-056-048-076.pools.arcor-ip.net) has joined #ceph
[5:53] * stp (~stp@dslb-084-056-011-102.pools.arcor-ip.net) Quit (Ping timeout: 480 seconds)
[6:01] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) has joined #ceph
[7:11] * Etherael (~eric@node-eor.pool-125-24.dynamic.totbb.net) has joined #ceph
[7:16] * Etherael1 (~eric@node-fim.pool-101-108.dynamic.totbb.net) Quit (Ping timeout: 480 seconds)
[8:14] * gregorg (~Greg@78.155.152.6) Quit (Read error: Operation timed out)
[8:14] * gregorg (~Greg@78.155.152.6) has joined #ceph
[8:38] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[8:39] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[9:17] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[9:26] * BManojlovic (~steki@85.222.183.165) has joined #ceph
[9:29] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:37] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[9:49] * Cube1 (~Cube@107.38.71.134) has joined #ceph
[9:49] * Cube (~Cube@107.38.71.134) Quit (Read error: Connection reset by peer)
[9:51] * gregorg (~Greg@78.155.152.6) has joined #ceph
[9:55] * gregorg_taf (~Greg@78.155.152.6) has joined #ceph
[9:55] * gregorg (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[9:59] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[10:02] * gregorg (~Greg@78.155.152.6) has joined #ceph
[10:02] * gregorg_taf (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[10:04] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[10:11] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[10:12] * loicd (~loic@magenta.dachary.org) has joined #ceph
[10:16] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[10:21] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[10:44] * joao (~JL@89-181-150-243.net.novis.pt) Quit (Ping timeout: 480 seconds)
[10:57] * BManojlovic (~steki@85.222.183.165) Quit (Quit: Ja odoh a vi sta 'ocete...)
[11:02] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[11:07] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) Quit (Quit: drokita)
[11:37] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[11:37] * loicd (~loic@magenta.dachary.org) has joined #ceph
[12:20] * Leseb (~Leseb@bea13-1-82-228-104-16.fbx.proxad.net) has joined #ceph
[12:26] * Leseb (~Leseb@bea13-1-82-228-104-16.fbx.proxad.net) Quit (Quit: Leseb)
[13:04] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[13:14] * mgalkiewicz (~mgalkiewi@89-74-85-171.dynamic.chello.pl) has joined #ceph
[13:41] * f4m8 (f4m8@kudu.in-berlin.de) has joined #ceph
[13:50] * Aiken (~Aiken@2001:44b8:2168:1000:21f:d0ff:fed6:d63f) Quit (Remote host closed the connection)
[14:15] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) has joined #ceph
[14:40] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[14:50] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) Quit (Quit: Leaving.)
[14:52] * danieagle (~Daniel@177.99.132.219) has joined #ceph
[14:53] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[14:54] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) has joined #ceph
[14:57] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[14:58] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[15:10] * danieagle (~Daniel@177.99.132.219) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[15:16] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[15:16] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) Quit (Quit: Leaving.)
[15:21] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) has joined #ceph
[15:33] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) Quit (Quit: Leaving.)
[15:39] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) has joined #ceph
[15:54] * The_Bishop (~bishop@e179009244.adsl.alicedsl.de) has joined #ceph
[15:57] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) Quit (Quit: Leaving.)
[15:58] * Leseb (~Leseb@bea13-1-82-228-104-16.fbx.proxad.net) has joined #ceph
[16:02] * Leseb (~Leseb@bea13-1-82-228-104-16.fbx.proxad.net) Quit ()
[16:06] * Cube1 (~Cube@107.38.71.134) Quit (Quit: Leaving.)
[16:13] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[16:14] * dshea (~dshea@masamune.med.harvard.edu) Quit (Remote host closed the connection)
[16:31] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) has joined #ceph
[16:43] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) Quit (Quit: Leaving.)
[16:45] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) has joined #ceph
[17:03] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[17:08] * styx-tdo (~styx@000146b8.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:13] * Machske (~bram@d5152D87C.static.telenet.be) has joined #ceph
[17:14] <Machske> hi guys, I've got a performance question. I'm running a ceph cluster with 3 hosts each with 4 disks of 1TB (sata). Each disk is an OSD, so in total we have 12 osd's, 3 mons and 2 mds (1 active/1 standby).
[17:15] <Machske> We're using cephfs for testing its robustness and performance.
[17:15] <Vjarjadian> sounds like a nice setup.
[17:15] <Machske> Whenever we have a recovery or remapping of pgs due to a reweight or so, cephfs is terribly slow, nearly unusable.
[17:16] <Machske> is this to be expected as cephfs is not yet production ready?
[17:16] <Vjarjadian> so basically you take an OSD offline to trigger a self heal and it becomes very slow?
[17:17] <Machske> yes, any type of self heal and it becomes very slow, to the point of being unusable. Same when adding an osd
[17:17] <Vjarjadian> could be that it takes all the bandwidth to do the self heal
[17:17] <Vjarjadian> since thats a priority
[17:17] <Vjarjadian> especially with 4tb drives... lots of data to shift around on 125 MB/s network equipment
[17:18] <Machske> That was my first idea, but I measured the bandwidth and there is plenty of bandwidth left on all hosts. They're on gigabit, but only consuming 100-200 Mbit
[17:18] <Vjarjadian> does the slowness end after a while?
[17:18] <Machske> netperf tests show a possible bandwidth of 890Mbit in tcp mode
[17:18] <Vjarjadian> in non sequential reads/writes, HDDs are a lot slower
[17:19] <Machske> slowness is completely gone after self heal.
[17:19] <Vjarjadian> then it's the self heal consuming the resources
[17:19] <Vjarjadian> testing the CPU usage of your OSDs?
[17:19] <Machske> with iostat -x I could see a high busy time for the drives indeed
[17:20] <Machske> cpu usage seems fine as in the machines are all dual octo cores
[17:20] <Vjarjadian> might be the problem would be less noticeable with 30 drives rather than 3...
[17:20] <Vjarjadian> 30 hosts i meant
[17:20] <Machske> because atm one host down means 30% capacity unavailable
[17:21] <Machske> well next week I have an environment available with 10 hosts
[17:21] <Vjarjadian> thats a lot of data to recover with 4 OSDs going down at once
[17:21] <Machske> I'll do the same testing on that env
[17:21] <Machske> That I can see
[17:22] <Machske> but still, when running 2 hosts with 4 osd's each and they are in sync, all is ok
[17:22] <Vjarjadian> the way i read how ceph works, that proportion of the OSDs going down could be too much strain...
[17:22] <Machske> adding the 3rd host with 4 osd's caused havoc during rebalancing
[17:22] <Vjarjadian> try dropping just 1 OSD on one host... and see if you get the same problem
[17:23] <Machske> ok I'll do that
[17:23] <Vjarjadian> any rebalance would cause a lot of disk IO
[17:23] <Machske> Is there a good way to throttle the rebalance so it leaves some IO for applications?
[17:25] <Vjarjadian> no idea... but would you prioritize data availability or data security?
[17:25] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[17:26] <Machske> well I guess it's a balance. Indeed data security is very important. But production unavailability would result in real-time loss of revenue.
[17:26] <Machske> Googling: http://comments.gmane.org/gmane.comp.file-systems.ceph.devel/8932
[17:26] <Vjarjadian> once you've tested with 10 hosts... it might be less noticeable
[17:26] <Machske> maybe I should test osd recovery max active = 1
[17:26] <Vjarjadian> as more IOs and network bandwidth available
[17:26] <Machske> I will do that to see how big a difference it makes
[17:27] <Machske> next week, I have the 10 hosts available, I'm very interested to see how it copes
[17:27] <Machske> thx for the feedback!
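
The throttling asked about above is usually done with the osd recovery options; a minimal sketch, assuming the option names and injectargs syntax of the 2012-era releases (values are starting points to experiment with, not recommendations):

    # lower the number of concurrent recovery operations per OSD at runtime
    # (exact 'tell ... injectargs' syntax differs a little between releases)
    ceph tell osd.\* injectargs '--osd_recovery_max_active 1'

    # to make it persistent, set the same option in the [osd] section of ceph.conf:
    #   [osd]
    #       osd recovery max active = 1
    # later releases also have 'osd max backfills' to cap backfilling PGs per OSD
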
[17:27] <Vjarjadian> i'm planning to start testing Ceph soon... been doing some reading on how it works
[17:28] <Machske> I'm looking for an alternative to gluster
[17:28] <Vjarjadian> i need the geo-replication feature before i can use it in production
[17:28] <Vjarjadian> otherwise my storage will slow down to about 100kb/s
[17:28] <Machske> but gluster just seems too unstable when sh*t hits the fan
[17:28] <Vjarjadian> lol
[17:28] <Machske> :)
[17:29] <Vjarjadian> i'm hoping i can integrate ceph with ESXi on pass through drives... or something like that
[17:32] <Vjarjadian> you tried running OSDs as VMs Machske?
[17:35] <Vjarjadian> might mean you could get more out of your hosts...
[17:36] * The_Bishop (~bishop@e179009244.adsl.alicedsl.de) Quit (Quit: Wer zum Teufel ist dieser Peer? Wenn ich den erwische dann werde ich ihm mal die Verbindung resetten!)
[17:38] * ninkotech_ (~duplo@89.177.137.231) has joined #ceph
[17:46] * scuttlemonkey (~scuttlemo@96-42-136-136.dhcp.trcy.mi.charter.com) has joined #ceph
[17:46] * ChanServ sets mode +o scuttlemonkey
[17:47] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) Quit (Quit: Leaving.)
[18:12] * scuttlemonkey (~scuttlemo@96-42-136-136.dhcp.trcy.mi.charter.com) Quit (Quit: This computer has gone to sleep)
[18:23] * The_Bishop (~bishop@2001:470:50b6:0:a8e1:4557:325b:389b) has joined #ceph
[18:23] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) has joined #ceph
[18:38] * schlitzer (~schlitzer@ip-109-90-143-216.unitymediagroup.de) has joined #ceph
[18:38] * Cube (~Cube@107.38.71.134) has joined #ceph
[18:39] <schlitzer> hey all, i'm just reading the documentation & trying out ceph with a few virtualized hosts.
[18:40] <Vjarjadian> hows it going?
[18:40] <Vjarjadian> thats how i'm planning to test it
[18:40] <schlitzer> quite nice
[18:40] * scuttlemonkey (~scuttlemo@96-42-136-136.dhcp.trcy.mi.charter.com) has joined #ceph
[18:40] * ChanServ sets mode +o scuttlemonkey
[18:40] <schlitzer> doing it with fc17
[18:40] <Vjarjadian> what OS you using for it?
[18:40] <schlitzer> fc17
[18:40] * scuttlemonkey (~scuttlemo@96-42-136-136.dhcp.trcy.mi.charter.com) Quit ()
[18:40] <Vjarjadian> ubuntu/fedora?
[18:41] <schlitzer> fedora 17^^
[18:41] <Vjarjadian> ah
[18:41] <Vjarjadian> how small have you been able to make the VMs to keep good bandwidth?
[18:42] <schlitzer> um, i was not testing it for performance, because the vm's are running on commodity hardware
[18:43] <Vjarjadian> commodity hardware is still good :)
[18:43] <schlitzer> and most of the vm's share the same disks, so performance is poo anyway :-D
[18:43] <schlitzer> poor^^
[18:44] <Machske> @Vjarjadian: OSD's are real machines :)
[18:44] <cephalobot`> Machske: Error: "Vjarjadian:" is not a valid command.
[18:44] <Machske> Vjarjadian: OSD's are real machines :)
[18:45] <Vjarjadian> i'm not a command... i'm insulted
[18:47] <schlitzer> but i was thinking about how a production setup could look. suppose i would buy a box that can handle 24 hard drives: 2 for the system (raid1), 2 ssd's for journals & 20 spindles for the osds.
[18:48] <Vjarjadian> or buy 3 or 4 of those boxes
[18:48] <schlitzer> what would be the better approach: having 20 disks on their own and 20 osd's, so one osd for each disk? or building 10x2 raid1 & 10 osd's?
[18:49] <Vjarjadian> or raid5/6 those servers
[18:49] <schlitzer> but raid5/6 is slow^^
[18:50] <Vjarjadian> depends on your performance requirements...
[18:50] <Vjarjadian> also allows more data stored than raid 10
[18:50] <schlitzer> yes
[18:51] <schlitzer> i think the main question i have is, should i do raid, or can ceph handle this for me
[18:51] <Vjarjadian> problem might be... with 20 OSDs, if your host went down... the likelihood of some data being stored on only one of those 20 is higher and you could lose it
[18:52] <schlitzer> yes, this is the riddle i try to solve
[18:52] <schlitzer> i think "CRUSH" is there to solve this...
[18:53] <Vjarjadian> so one OSD per host
[18:53] <Vjarjadian> with raid
[18:53] <schlitzer> hmmm, i'm not sure if this is needed
[18:54] <schlitzer> also in the docs there are statements where multiple osd's are recommended if ext4/xfs is in use
[18:54] <schlitzer> (multiple osd per server)
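
A rough sketch of the one-osd-per-disk layout being weighed up here, in mkcephfs-era ceph.conf form; the host name, devices and partitions are made up, and the point is only the shape: one [osd.N] section per spindle with its journal on an SSD partition. The stock crush rules use 'step chooseleaf firstn 0 type host', so replicas already end up on different hosts without raid underneath:

    ; ceph.conf fragment (illustrative names and paths only)
    [osd.0]
        host = store1
        ; first data spindle
        devs = /dev/sdc
        ; journal partition on the first SSD
        osd journal = /dev/sda5
    [osd.1]
        host = store1
        devs = /dev/sdd
        osd journal = /dev/sda6
    ; ...and so on for the remaining spindles
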
[18:54] <Vjarjadian> i've still got lots of reading to do
[18:55] <Vjarjadian> mainly testing samba4 atm
[18:55] <schlitzer> :-)
[18:55] <Vjarjadian> if i could boot samba4 directly from Ceph i'd be even happier :)
[18:56] <schlitzer> ehhh? booting samba 4 from ceph?
[18:56] <schlitzer> you mean you store the image of the samba server in ceph?
[18:56] <schlitzer> or having a physical samba server with file storage in ceph?
[18:56] <Vjarjadian> yes... been doing some quick reading on iSCSI... looks promising... but i dont know yet
[19:00] <Vjarjadian> either way could work
[19:01] <Vjarjadian> ceph seems to give plenty of options
[19:02] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[19:04] * sagelap (~sage@76.89.177.113) has joined #ceph
[19:05] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) Quit (Quit: Leaving.)
[19:06] * Leseb (~Leseb@bea13-1-82-228-104-16.fbx.proxad.net) has joined #ceph
[19:12] <CloudGuy> hi all .. how do you use a rbd ? do you define it via /sys/bus/rbd/add, format and use it like a normal filesystem ?
[19:18] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[19:36] <Machske> rbd gives you a block device, so you could indeed format it and use it as a fs, or you could, for example, use it as a virtual disk in a virtual machine
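
A minimal sketch of that workflow with the rbd CLI and the kernel client; the image name is made up and the image lands in the default 'rbd' pool. The raw /sys/bus/rbd/add interface asked about above works as well, but 'rbd map' is the friendlier wrapper around it:

    # create a 10 GB image (--size is in MB), map it through the kernel rbd module,
    # then treat the resulting block device like any other disk
    rbd create myimage --size 10240
    rbd map myimage            # appears as /dev/rbd0 (and under /dev/rbd/)
    mkfs.ext4 /dev/rbd0
    mkdir -p /mnt/myimage
    mount /dev/rbd0 /mnt/myimage
    # or hand the image to a VM as a virtual disk via qemu/librbd instead
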
[19:36] * Cube (~Cube@107.38.71.134) Quit (Quit: Leaving.)
[19:44] * scuttlemonkey (~scuttlemo@96-42-136-136.dhcp.trcy.mi.charter.com) has joined #ceph
[19:44] * ChanServ sets mode +o scuttlemonkey
[19:45] * slang (~slang@cpe-66-91-114-250.hawaii.res.rr.com) Quit (Quit: slang)
[19:45] * slang (~slang@cpe-66-91-114-250.hawaii.res.rr.com) has joined #ceph
[19:49] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[19:49] * madkiss (~madkiss@178.188.60.118) Quit ()
[19:54] <Vjarjadian> machske, do you use ceph in that manner?
[19:55] * Aiken (~Aiken@2001:44b8:2168:1000:21f:d0ff:fed6:d63f) has joined #ceph
[19:56] * scuttlemonkey (~scuttlemo@96-42-136-136.dhcp.trcy.mi.charter.com) Quit (Quit: This computer has gone to sleep)
[20:02] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) has joined #ceph
[20:04] * slang (~slang@cpe-66-91-114-250.hawaii.res.rr.com) Quit (Quit: slang)
[20:04] * joao (~JL@89.181.157.91) has joined #ceph
[20:04] * ChanServ sets mode +o joao
[20:06] * slang (~slang@cpe-66-91-114-250.hawaii.res.rr.com) has joined #ceph
[20:13] * Leseb (~Leseb@bea13-1-82-228-104-16.fbx.proxad.net) Quit (Quit: Leseb)
[20:14] * The_Bishop (~bishop@2001:470:50b6:0:a8e1:4557:325b:389b) Quit (Read error: Operation timed out)
[20:17] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[20:17] * The_Bishop (~bishop@2001:470:50b6:0:a8e1:4557:325b:389b) has joined #ceph
[20:27] * CloudGuy (~CloudGuy@5356416B.cm-6-7b.dynamic.ziggo.nl) has joined #ceph
[20:32] * Cube (~Cube@c-24-10-25-199.hsd1.ca.comcast.net) has joined #ceph
[20:33] * slang (~slang@cpe-66-91-114-250.hawaii.res.rr.com) Quit (Quit: slang)
[20:38] * scuttlemonkey (~scuttlemo@96-42-136-136.dhcp.trcy.mi.charter.com) has joined #ceph
[20:38] * ChanServ sets mode +o scuttlemonkey
[20:48] * Leseb (~Leseb@bea13-1-82-228-104-16.fbx.proxad.net) has joined #ceph
[20:49] * Leseb (~Leseb@bea13-1-82-228-104-16.fbx.proxad.net) Quit ()
[20:49] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[20:56] * scuttlemonkey (~scuttlemo@96-42-136-136.dhcp.trcy.mi.charter.com) Quit (Quit: This computer has gone to sleep)
[21:12] * Oliver2 (~oliver1@pD95DAF0D.dip.t-dialin.net) Quit (Quit: Leaving.)
[21:50] * madkiss (~madkiss@178.188.60.118) has joined #ceph
[21:51] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[21:56] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:00] * Oliver2 (~oliver1@p548396A7.dip.t-dialin.net) has joined #ceph
[22:13] * madkiss (~madkiss@178.188.60.118) Quit (Quit: Leaving.)
[22:18] * ScOut3R (~ScOut3R@1F2E59B1.dsl.pool.telekom.hu) has joined #ceph
[22:19] <schlitzer> hmm, if i write data to a pool / rbd device, shouldn't there be any clones if i watch with "rados df"?
[22:19] <CloudGuy> hi all .. if say a hard drive, sata 7200 rpm, has an io of 10, and there are say 10 hard drives (osds), will the io be 10 x 10 = 100?
[22:21] <CloudGuy> in other words, if i take a normal sata hard drive with say 75 iops and have 10 servers in the pool, would the iops increase in the same fashion?
[22:22] * Oliver2 (~oliver1@p548396A7.dip.t-dialin.net) Quit (Quit: Leaving.)
[22:22] * Oliver2 (~oliver1@p548396A7.dip.t-dialin.net) has joined #ceph
[22:23] * Oliver2 (~oliver1@p548396A7.dip.t-dialin.net) Quit ()
[22:24] * Oliver2 (~oliver1@p548396A7.dip.t-dialin.net) has joined #ceph
[22:29] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[22:32] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) has joined #ceph
[22:43] <iggy> CloudGuy: not linearly really, but pretty close (when you take into account replication, etc)
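
A rough back-of-envelope for that, assuming ~75 IOPS per SATA spindle, 10 OSDs and 2x replication; journal placement, caching and the actual object distribution all shift the real numbers:

    # reads hit each object's primary OSD, and crush spreads primaries across
    # all spindles, so read IOPS scale roughly with the disk count
    echo $(( 10 * 75 ))        # ~750 aggregate read IOPS
    # each write goes to every replica, so divide by the replica count
    echo $(( 10 * 75 / 2 ))    # ~375 aggregate write IOPS at 2x replication
    # journals sharing the data spindles roughly halve write IOPS again
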
[22:45] * wschulze (~wschulze@cpe-98-14-23-162.nyc.res.rr.com) Quit (Quit: Leaving.)
[22:53] * ScOut3R (~ScOut3R@1F2E59B1.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[22:59] * Oliver2 (~oliver1@p548396A7.dip.t-dialin.net) Quit (Quit: Leaving.)
[23:01] * ScOut3R (~ScOut3R@1F2E59B1.dsl.pool.telekom.hu) has joined #ceph
[23:03] * ScOut3R (~ScOut3R@1F2E59B1.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[23:09] * stp__ (~stp@dslb-084-056-048-076.pools.arcor-ip.net) Quit (Quit: Leaving)
[23:26] * ScOut3R (~ScOut3R@1F2E59B1.dsl.pool.telekom.hu) has joined #ceph
[23:29] * ScOut3R (~ScOut3R@1F2E59B1.dsl.pool.telekom.hu) Quit (Remote host closed the connection)
[23:42] * rubitita01248 (~satu_sd@17.Red-81-32-147.dynamicIP.rima-tde.net) has joined #ceph
[23:45] * schlitzer (~schlitzer@ip-109-90-143-216.unitymediagroup.de) Quit (Ping timeout: 480 seconds)
[23:50] * danieagle (~Daniel@177.99.132.219) has joined #ceph
[23:50] * rubitita01248 (~satu_sd@17.Red-81-32-147.dynamicIP.rima-tde.net) Quit (autokilled: Spambot. Mail support@oftc.net with questions (2012-12-25 22:50:52))
[23:58] * madkiss (~madkiss@178.188.60.118) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.