#ceph IRC Log

IRC Log for 2012-09-24

Timestamps are in GMT/BST.

[0:01] * loicd1 (~loic@magenta.dachary.org) has joined #ceph
[0:03] * slang (~slang@adsl-69-105-99-173.dsl.irvnca.pacbell.net) Quit (Quit: Leaving.)
[0:08] * ninkotech (~duplo@89.177.137.231) Quit (Quit: Konversation terminated!)
[0:15] * BManojlovic (~steki@195.13.166.253) has joined #ceph
[0:16] * SvenDowideit (~SvenDowid@203-206-171-38.perm.iinet.net.au) has joined #ceph
[0:21] * loicd1 (~loic@magenta.dachary.org) has left #ceph
[0:40] * danieagle (~Daniel@186.214.92.94) has joined #ceph
[0:56] * ibotty (~me@91-65-242-21-dynip.superkabel.de) Quit (Read error: Connection timed out)
[1:02] * steki-BLAH (~steki@bojanka.net) has joined #ceph
[1:05] * BManojlovic (~steki@195.13.166.253) Quit (Ping timeout: 480 seconds)
[1:17] * Ryan_Lane (~Adium@208.251.135.189) has joined #ceph
[1:17] * steki-BLAH (~steki@bojanka.net) Quit (Ping timeout: 480 seconds)
[1:18] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:27] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:40] * Ryan_Lane (~Adium@208.251.135.189) Quit (Quit: Leaving.)
[1:57] * pentabular (~sean@70.231.141.128) has joined #ceph
[2:09] * slang (~slang@adsl-68-126-60-252.dsl.irvnca.pacbell.net) has joined #ceph
[2:15] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:51] * lxo (~aoliva@lxo.user.oftc.net) Quit (Quit: later)
[2:51] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:54] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[3:00] * slang1 (~slang@adsl-69-105-97-206.dsl.irvnca.pacbell.net) has joined #ceph
[3:06] * slang (~slang@adsl-68-126-60-252.dsl.irvnca.pacbell.net) Quit (Ping timeout: 480 seconds)
[3:13] * slang1 (~slang@adsl-69-105-97-206.dsl.irvnca.pacbell.net) Quit (Ping timeout: 480 seconds)
[3:22] * danieagle (~Daniel@186.214.92.94) Quit (Quit: See you later :-) and Thank You Very Much for Everything!!! ^^)
[3:45] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[5:51] * John (~john@m8f0536d0.tmodns.net) has joined #ceph
[6:00] * John (~john@m8f0536d0.tmodns.net) Quit (Quit: Leaving)
[6:58] * pentabular (~sean@70.231.141.128) has left #ceph
[7:18] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[7:30] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[7:42] * maelfius (~mdrnstm@pool-71-160-33-115.lsanca.fios.verizon.net) has left #ceph
[8:02] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:02] * loicd1 (~loic@magenta.dachary.org) has joined #ceph
[8:02] * loicd (~loic@magenta.dachary.org) Quit ()
[8:05] * loicd1 (~loic@magenta.dachary.org) has left #ceph
[8:22] * EmilienM (~EmilienM@55.67.197.77.rev.sfr.net) has joined #ceph
[8:51] * gohko (~gohko@natter.interq.or.jp) Quit (Read error: Connection reset by peer)
[8:52] * gohko (~gohko@natter.interq.or.jp) has joined #ceph
[9:17] * verwilst (~verwilst@d5152FEFB.static.telenet.be) has joined #ceph
[9:18] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[9:26] * BManojlovic (~steki@87.110.183.173) has joined #ceph
[9:37] * MikeMcClurg (~mike@93-137-106-21.adsl.net.t-com.hr) has joined #ceph
[9:51] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[10:34] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[11:07] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[11:16] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[11:33] * ninkotech (~duplo@89.177.137.231) has joined #ceph
[11:35] * luckky (~73f2d7f6@2600:3c00::2:2424) has joined #ceph
[11:37] * andret (~andre@pcandre.nine.ch) Quit (Remote host closed the connection)
[11:37] * andret (~andre@pcandre.nine.ch) has joined #ceph
[11:40] * spicewiesel (~spicewies@static.60.149.40.188.clients.your-server.de) has joined #ceph
[11:40] <spicewiesel> hi all
[11:40] * luckky (~73f2d7f6@2600:3c00::2:2424) Quit (Quit: TheGrebs.com CGI:IRC (Ping timeout))
[11:40] * deepsa_ (~deepsa@122.172.35.201) has joined #ceph
[11:41] * deepsa (~deepsa@122.172.157.5) Quit (Ping timeout: 480 seconds)
[11:41] * deepsa_ is now known as deepsa
[11:44] * loicd (~loic@178.20.50.225) has joined #ceph
[12:24] * pentabular (~sean@70.231.141.128) has joined #ceph
[12:24] * guilhemfr (~guilhem@tui75-3-88-168-236-26.fbx.proxad.net) has joined #ceph
[12:24] * pentabular (~sean@70.231.141.128) Quit ()
[12:38] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[12:49] * deepsa_ (~deepsa@101.62.50.169) has joined #ceph
[12:51] * deepsa (~deepsa@122.172.35.201) Quit (Ping timeout: 480 seconds)
[12:51] * deepsa_ is now known as deepsa
[13:34] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[13:37] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:55] * coredumb (~coredumb@ns.coredumb.net) has joined #ceph
[13:55] <coredumb> Hello
[13:55] <coredumb> Is it normal that i don't find the rbd kernel module in a vanilla tree?
[13:57] <coredumb> oh maybe i'm just blind :)
[14:04] <coredumb> it's just not showing up in menuconfig for some reason...
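
For anyone else hunting for the option coredumb could not see: in a mainline tree of that era the rbd driver lives under Device Drivers -> Block devices as CONFIG_BLK_DEV_RBD, and from memory of that Kconfig it only appears once its dependencies (the block layer, networking, and CONFIG_EXPERIMENTAL) are enabled. A minimal sketch, to be checked against the tree at hand:

    # confirm the option and its declared dependencies in this tree
    grep -A5 'config BLK_DEV_RBD' drivers/block/Kconfig
    # enable it: Device Drivers -> Block devices -> "Rados block device (RBD)"
    make menuconfig
    # build just that module once it is enabled
    make drivers/block/rbd.ko
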
[14:04] * holden247 (~rdannert@p5B16A974.dip.t-dialin.net) has joined #ceph
[14:23] * deepsa_ (~deepsa@122.172.27.196) has joined #ceph
[14:29] * deepsa (~deepsa@101.62.50.169) Quit (Ping timeout: 480 seconds)
[14:29] * deepsa_ is now known as deepsa
[14:35] * Norman (53a31f10@ircip3.mibbit.com) has joined #ceph
[14:35] <Norman> Hi guys, we are looking into rolling out a 50TB storage cluster, what would be the advised setup regarding nodes? Would 2 nodes with 24 bays x 3TB do the job, or would this not be "safe" enough? Does Ceph need more machines?
[14:47] * MikeMcClurg (~mike@93-137-106-21.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[14:48] <Norman> On a side note, some of the people are leaning more towards using Swift for our needs. What would be the pros and cons of Ceph compared to Swift?
[14:53] * aliguori (~anthony@cpe-70-123-140-180.austin.res.rr.com) has joined #ceph
[15:09] * SvenDowideit (~SvenDowid@203-206-171-38.perm.iinet.net.au) Quit (Read error: Operation timed out)
[15:16] * deepsa_ (~deepsa@115.184.110.249) has joined #ceph
[15:17] * deepsa (~deepsa@122.172.27.196) Quit (Ping timeout: 480 seconds)
[15:17] * deepsa_ is now known as deepsa
[15:30] * loicd (~loic@178.20.50.225) Quit (Quit: Leaving.)
[15:30] * lofejndif (~lsqavnbok@1GLAAAF9T.tor-irc.dnsbl.oftc.net) has joined #ceph
[15:34] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[15:35] * loicd (~loic@178.20.50.225) has joined #ceph
[15:53] * cblack101 (c0373624@ircip2.mibbit.com) has joined #ceph
[15:56] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) Quit (Quit: Leaving.)
[15:59] * lofejndif (~lsqavnbok@1GLAAAF9T.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[16:03] <nhm> Norman: Heya, 2 nodes with 24 drives would be pretty much the minimum you could get away with. A better setup would be 4 nodes with 12 drives each if you can swing it.
[16:09] <Norman> Hi nhm, thnx. Would that be for performance or for data reliability? Would the cluster be intact if a node goes down in a two-node setup?
[16:09] <Norman> of course when I say "intact" I mean would it still run in degraded form :)
[16:11] <jamespage> what's the state of the upstart support in 0.48.2? in 0.48 it was not recommended but I note fixes/improvements in 0.48.2...
[16:20] * holden247 (~rdannert@p5B16A974.dip.t-dialin.net) Quit (Quit: Leaving.)
[16:24] <loicd> sileht: is there a URL for the ceph deployment script you're writing? Even if it's still in progress I'd love to take a look :-)
[16:27] <sileht> loicd not yet
[16:28] <nhm> Norman: It'll still run assuming the other node is functional, but at that point all of your eggs will be in one basket. I'm not sure if by default it will try to re-replicate data to OSDs on the remaining node. Either way you'll be in kind of a bad place. Either you'll re-replicate the entire set of data (assuming there is free space), or you'll only have 1 copy of the data remaining.
[16:28] <loicd> sileht: ok. Let me know when there is, I will be happy to review and comment.
[16:29] <nhm> Norman: As far as performance goes, the testing I've been doing lately seems to suggest that a given node will top out at about 1.3GB/s when discounting the network.
[16:29] <sileht> loicd, it's just a script to build the ceph.conf and start mkcephfs for our test platforms ;)
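
For readers wondering what such a deployment script produces: a 0.48-era single-node test setup is roughly a hand-written ceph.conf followed by mkcephfs and the init script. This is a hedged sketch, not sileht's script; hostnames, addresses, and paths are placeholders:

    # /etc/ceph/ceph.conf (single test node)
    [global]
        auth supported = cephx
    [mon.a]
        host = testnode1
        mon addr = 192.168.0.10:6789
    [osd.0]
        host = testnode1
        osd data = /var/lib/ceph/osd/ceph-0
    [mds.a]
        host = testnode1

    # then create the on-disk layout and start the daemons
    mkdir -p /var/lib/ceph/osd/ceph-0
    mkcephfs -a -c /etc/ceph/ceph.conf -k /etc/ceph/keyring
    service ceph -a start
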
[16:32] * lofejndif (~lsqavnbok@04ZAAFMJR.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:33] * BManojlovic (~steki@87.110.183.173) Quit (Quit: I'm off, and you do whatever you want...)
[16:34] <nhm> Norman: granted, that's not involving RGW, which will introduce overhead.
[16:42] <Norman> nhm: Well that would be fast enough, we will run on a gbit network first anyway.
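
Some back-of-the-envelope numbers for the setup Norman describes (my arithmetic, not stated in the log): 2 nodes x 24 bays x 3 TB is 144 TB raw, which at the default 2x replication is roughly 72 TB usable (about 48 TB at 3x), before reserving headroom for recovery. The replication factor is per pool and can be inspected and changed with commands along these lines ('data' is the stock pool name):

    ceph osd dump | grep 'rep size'     # show each pool's replication factor
    ceph osd pool set data size 2       # keep 2 copies of everything in the 'data' pool
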
[16:46] * EmilienM (~EmilienM@55.67.197.77.rev.sfr.net) Quit (Read error: No route to host)
[16:48] <guilhemfr> hi all
[16:48] <guilhemfr> I have a problem with "rados cppool"
[16:48] <guilhemfr> I created a new pool with more pgs, and I want to transfer from the old to the new with cppool
[16:49] <guilhemfr> and here is my error :
[16:49] <guilhemfr> error copying object: No such file or directory
[16:49] <guilhemfr> error copying pool ys-streaming => ys-streaming-new: No such file or directory
[16:50] <guilhemfr> the first 10 files are working before this error on "ys-streaming:18__shadow_8d9bfdc60d1e5b57bddce191da38e8d30cc25c90.png._IoNls-xb2dyaQ9uzHaE6IRzWpaWg1zx(@18_8d9bfdc60d1e5b57bddce191da38e8d30cc25c90.png) => ys-streaming-new:18__shadow_8d9bfdc60d1e5b57bddce191da38e8d30cc25c90.png._IoNls-xb2dyaQ9uzHaE6IRzWpaWg1zx(@18_8d9bfdc60d1e5b57bddce191da38e8d30cc25c90.png)"
[16:52] <guilhemfr> https://gist.github.com/cc27ae5a8c2d178501bc
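
The workflow guilhemfr describes looks roughly like the following (pool names are from the log; the pg count is illustrative). Note that cppool walks a live pool, so objects that disappear mid-copy, for example deleted by radosgw, can surface as exactly this kind of ENOENT, which is why quiescing clients first is the usual precaution:

    ceph osd pool create ys-streaming-new 1024    # new pool with a higher pg_num
    rados cppool ys-streaming ys-streaming-new    # copy every object across
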
[16:54] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[16:58] <nhm> Norman: ok. Do keep in mind that with RGW there will be significant performance overhead if doing small IO, especially if doing a lot of small IO to a single bucket.
[16:58] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) has joined #ceph
[16:58] <nhm> There are some tweaks that potentially can improve that if this is a big consideration.
[17:00] * EmilienM (~EmilienM@55.67.197.77.rev.sfr.net) has joined #ceph
[17:12] * Norman (53a31f10@ircip3.mibbit.com) Quit (Quit: http://www.mibbit.com ajax IRC Client)
[17:13] * slang (~slang@2607:f298:a:607:9911:cd67:ceed:4ee8) has joined #ceph
[17:16] * Cube (~Adium@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[17:26] <joao> has LA moved away from PDT?
[17:28] <joao> oh, nevermind
[17:28] <joao> my math is terrible today
[17:37] * gregaf (~Adium@2607:f298:a:607:e920:3b6d:3a02:2ffc) has joined #ceph
[17:46] * mkampe (~markk@2607:f298:a:607:222:19ff:fe31:b5d3) has joined #ceph
[17:47] * yehuda_hm (~yehuda@99-48-179-68.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[18:01] * glowell (~Adium@38.122.20.226) has joined #ceph
[18:21] * loicd (~loic@178.20.50.225) Quit (Ping timeout: 480 seconds)
[18:36] * deepsa_ (~deepsa@122.172.161.4) has joined #ceph
[18:37] * deepsa (~deepsa@115.184.110.249) Quit (Ping timeout: 480 seconds)
[18:37] * deepsa_ is now known as deepsa
[18:40] * sjust (~sam@2607:f298:a:607:baac:6fff:fe83:5a02) has joined #ceph
[18:43] * sagelap (~sage@129.sub-174-254-80.myvzw.com) has joined #ceph
[18:47] * Tv_ (~tv@2607:f298:a:607:912:1bb:3a6b:cca2) has joined #ceph
[18:48] * Cube (~Adium@12.248.40.138) has joined #ceph
[18:50] * joshd (~joshd@2607:f298:a:607:221:70ff:fe33:3fe3) has joined #ceph
[18:52] <Tv_> now there's a scary log message.. Sep 24 09:48:28 dreamer NetworkManager[1152]: <info> (eth0): writing resolv.conf to /sbin/resolvconf
[18:55] <joao> oh wow
[18:55] <joao> how's that supposed to work?
[18:57] <mikeryan> incredible, makes me happy i've avoided gnome like the plague for a decade now
[18:58] <nhm> gnome's network manager stuff is kind of insane.
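
For context on the log line that started this: /sbin/resolvconf is an executable that takes resolv.conf-style records on stdin and merges them per interface, so NetworkManager is piping its generated config into that program rather than overwriting the binary. A small sketch of the same interface (the interface name and nameserver are placeholders):

    printf 'nameserver 192.0.2.53\n' | sudo resolvconf -a eth0    # register records for eth0
    sudo resolvconf -d eth0                                       # and remove them again
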
[18:59] * sagelap (~sage@129.sub-174-254-80.myvzw.com) Quit (Ping timeout: 480 seconds)
[19:00] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[19:02] * yehudasa (~yehudasa@2607:f298:a:607:484d:925a:d3d:e9ca) has joined #ceph
[19:12] * sagelap (~sage@2600:1013:b01b:d236:64c8:48b:8f5f:4504) has joined #ceph
[19:12] * dmick (~dmick@2607:f298:a:607:b46e:6310:b0cc:f34) has joined #ceph
[19:15] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) has joined #ceph
[19:19] * ajm (~ajm@adam.gs) Quit (Quit: ajm)
[19:20] * sagelap1 (~sage@38.122.20.226) has joined #ceph
[19:21] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Read error: Operation timed out)
[19:21] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[19:22] * ajm (~ajm@adam.gs) has joined #ceph
[19:22] * sagelap (~sage@2600:1013:b01b:d236:64c8:48b:8f5f:4504) Quit (Ping timeout: 480 seconds)
[19:35] <cblack101> Can someone point me to the URL with step-by-step instructions for uninstalling Ceph from a client, including the rbd components... I need to upgrade an old one and I figured a complete uninstall is appropriate.
[19:36] <dmick> not sure there really is such a URL cblack101; for correctness stopping the daemons and disabling any /etc/init or init.d stuff should be sufficient.
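
Spelling out dmick's suggestion for a Debian/Ubuntu client; the package and service names below are assumptions for the packaging of that era, so adjust as needed:

    sudo service ceph stop                                   # or: sudo /etc/init.d/ceph stop
    sudo rmmod rbd libceph                                   # only if the kernel client modules are loaded
    sudo apt-get purge ceph ceph-common librados2 librbd1    # then install the newer release cleanly
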
[19:49] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) has joined #ceph
[19:55] <elder> 1 petabit per second transmission over 50 km over a 12-optical-core cable announced last Friday. That works out to about 10 GB/sec per optical line.
[19:55] <elder> No, 10 TB/sec
[19:55] <elder> (But what's a factor of 1000 here or there.)
[19:55] <gregaf> that's got to be more than one cable per "optical core"
[19:56] <gregaf> unless it's really announcing a 20x improvement from current signals by hiding it inside a big aggregate grouping?
[19:56] <elder> http://www.ntt.co.jp/news2012/1209e/120920a.html
[19:56] <elder> I don't know, I just scanned it.
[19:59] <Tv_> doing over arbitrary cable conditions in real world deployments is a totally different challenge, though
[19:59] <Tv_> *doing that
[19:59] <gregaf> well, they say you can get 1 terabit/s out of a single core
[19:59] <gregaf> so...dunno
[20:00] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[20:00] <Tv_> yeah so this is 1000/12-fold improvement in lab circumstances vs real world
[20:00] <Tv_> say 100 to make it round -- i'll easily believe that
[20:02] * phantomcircuit (~phantomci@92.40.253.238.threembb.co.uk) has joined #ceph
[20:04] <gregaf> no, I mean they're saying people can already get 1 terabit/s out of a single core, so the interesting bit here (honest-to-god) is that they stuck a bunch of them together (attached to a single optical processor thingy, it looks like?)
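
A quick unit check on the figures in this thread, using the totals from the NTT announcement elder linked:

    echo '1000/12'   | bc -l    # 1 Pbit/s over 12 cores: ~83 Tbit/s per core
    echo '1000/12/8' | bc -l    # in bytes: ~10.4 TB/s per core, matching elder's corrected figure
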
[20:04] <phantomcircuit> i know this sounds illy but... is ceph stable?
[20:04] <phantomcircuit> silly*
[20:04] <Tv_> phantomcircuit: http://ceph.com/docs/master/faq/#is-ceph-production-quality
[20:04] * Ryan_Lane (~Adium@216.38.130.162) has joined #ceph
[20:06] <phantomcircuit> Tv_, if im parsing that correctly, the storage system is stable but the filesystem is still undergoing qa
[20:06] <Tv_> yup
[20:07] <phantomcircuit> so using the rbd interface would be stable
[20:07] <dmick> phantomcircuit: yes, so rbd and the rados gateway are considered quite solid
[20:07] <phantomcircuit> glad to hear it
[20:08] <phantomcircuit> my current block device setup is starting to show its uh poorly thought out nature :)
[20:08] <darkfaded> md on top of /dev/nbd and then lvm-striped? :)
[20:09] <phantomcircuit> darkfaded, even worse...
[20:09] <phantomcircuit> local disk with periodic rsync
[20:10] <phantomcircuit> it's totally not acceptable but it's for a very low-rent vps system (i actually warn people to assume fsync is ignored)
[20:10] <darkfaded> hehe
[20:10] <dmick> you can probably get slightly better consistency with Ceph :)
[20:11] <phantomcircuit> i've actually got flashcache running locally so fsync is relatively safe on the vps's
[20:11] <phantomcircuit> however if that machine dies then things are just rsync backups
[20:11] <phantomcircuit> which is pretty lame
[20:11] * maelfius (~mdrnstm@adsl-99-16-51-31.dsl.lsan03.sbcglobal.net) has joined #ceph
[20:11] <phantomcircuit> it would take like a week to restore from backup...
[20:12] <darkfaded> are they at the file-in-the-vps level?
[20:13] <phantomcircuit> darkfaded, no they're all qcow2 files
[20:13] <darkfaded> otherwise you can use dd tunneled through netcat for the restore, just as an example
[20:13] <darkfaded> ah, yeah then that would be helpful
[20:13] <darkfaded> but i'm not in any way saying the current setup should stay the way it is lol
[20:13] <phantomcircuit> darkfaded, right but the restore would be over 1gbps ethernet :)
[20:14] <darkfaded> yummy.
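
A minimal sketch of the dd-tunneled-through-netcat restore darkfaded mentions; hostnames, ports, paths, and the netcat flags (which vary by flavour) are placeholders:

    # on the host receiving the image
    nc -l -p 9000 | dd of=/var/lib/vms/guest1.img bs=1M
    # on the host holding the backup
    dd if=/backup/guest1.img bs=1M | nc restore-host 9000
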
[20:14] * cattelan (~cattelan@2001:4978:267:0:21c:c0ff:febf:814b) has joined #ceph
[20:15] * cattelan (~cattelan@2001:4978:267:0:21c:c0ff:febf:814b) Quit ()
[20:15] <darkfaded> at a friend's we backup the vm snapshots to a very slow readynas device
[20:15] * cattelan (~cattelan@2001:4978:267:0:21c:c0ff:febf:814b) has joined #ceph
[20:15] <darkfaded> so i'm writing out compressed, because that way it can be read back at almost useful speed
[20:16] <gregaf> I've got a readynas at home that can support 10MB/s writes
[20:16] <gregaf> *cry*
[20:16] <darkfaded> i think write is somewhere around 10MB/s and reading (and uncompressing) is 85MB/s
[20:16] <darkfaded> gregaf: seems about same hehe
[20:16] <darkfaded> i don't know how they make them be so slow
[20:16] <gregaf> I don't think my reads are much faster than that, though I could be wrong
[20:17] <darkfaded> gregaf: raw read speed on ours is maybe 25mb/s
[20:17] <darkfaded> with sunshine and all fluff
[20:17] <gregaf> ah, it could be that I can read that fast
[20:17] <gregaf> although I've seen it stutter occasionally when playing back HD videos, but that might just be if I've got something else being busy on it
[20:18] <darkfaded> io-heavy commands like "touch test"
[20:25] <phantomcircuit> i think i'll setup a cluster at home first to make sure i can make this all work :)
[20:25] <phantomcircuit> thanks for the info
[20:25] <phantomcircuit> bye
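
If those qcow2 images do end up on rbd, one common route is qemu-img's rbd support; the pool and image names below are placeholders, and this assumes a qemu built with rbd enabled:

    qemu-img convert -O raw /var/lib/vms/guest1.qcow2 rbd:rbd/guest1   # import into the 'rbd' pool
    qemu-img info rbd:rbd/guest1                                       # sanity-check the result
    # the guest can then boot from -drive file=rbd:rbd/guest1 instead of the local qcow2
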
[20:25] * phantomcircuit (~phantomci@92.40.253.238.threembb.co.uk) has left #ceph
[20:29] * andreask (~andreas@chello062178013131.5.11.vie.surfer.at) has joined #ceph
[20:33] * Tamil (~Adium@2607:f298:a:607:6487:5c8d:9d00:e1ba) has joined #ceph
[20:33] * phantomcircuit (~phantomci@92.40.253.238.threembb.co.uk) has joined #ceph
[20:33] <phantomcircuit> one more question
[20:34] <phantomcircuit> if a block gets written out as all null will it get dropped or stored as all null
[20:34] <phantomcircuit> ie does it still use disk space
[20:36] * MikeMcClurg (~mike@93-137-178-252.adsl.net.t-com.hr) has joined #ceph
[20:38] <dmick> the object store supports sparse objects
[20:38] <gregaf> yeah, but does it convert (large enough groups of) zeros into sparseness?
[20:38] <gregaf> I recall work being done on sparseness, but I don't think it'll auto-detect it
[20:39] <dmick> just testing
[20:39] <phantomcircuit> hmm
[20:40] <phantomcircuit> so basically i would need to arrange for disk space to be ever expanding up to the maximum size
[20:40] <elder> I don't expect any file system backing an OSD to automatically turn a block of all zeroes written into a hole in the file.
[20:41] * jlogan (~Thunderbi@2600:c00:3010:1:8514:9a09:8246:758c) has joined #ceph
[20:41] <elder> There's no reason one couldn't set up something that would do that (i.e., from user space) but I don't believe it'll happen otherwise.
[20:41] <joshd> phantomcircuit: there is discard support through qemu, if you're using qemu/kvm
[20:42] <joshd> then your guests can punch holes or mount ext4 with -o discard and reclaim space
[20:42] <dmick> and I think that bytes that are not written are not written, right?...so seeking to 10M and writing one byte does not write 10M+1 bytes
[20:43] <dmick> (in the case of rbd it's going to be about the stripe size, but in general)
[20:43] <phantomcircuit> joshd, oh really? that's really nice
[20:44] <phantomcircuit> dmick, right but if you have a vm that writes 10 MB and then deletes it typically that 10 MB is still allocated and essentially unrecoverable
[20:44] <dmick> yes
[20:44] <phantomcircuit> currently im using qcow2 images on the local disk which has that problem of ever expanding disk usage
[20:44] <dmick> but that's not writing zeros. but yeah, discard support is what you're most interesting in probably
[20:44] <dmick> *interested
[20:45] <phantomcircuit> there's a few hacks with qemu that have to do with null blocks
[20:45] <phantomcircuit> so i was just wondering if it was similar
[20:45] <phantomcircuit> glad to hear there's discard support
[20:45] <dmick> yes, there are hacks in tar too IIRC
[20:46] <dmick> (and cp, and cpio)
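
On the guest side, the discard path joshd describes looks roughly like this, assuming the virtual disk is attached in a way that advertises discard; the device and mount point are placeholders:

    mount -o discard /dev/sdb1 /mnt/data    # ext4 online discard: frees rbd space as files are deleted
    fstrim -v /mnt/data                     # or reclaim in batches instead of on every delete
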
[20:47] * loicd (~loic@jem75-2-82-233-234-24.fbx.proxad.net) Quit (Quit: Leaving.)
[20:52] * jlogan (~Thunderbi@2600:c00:3010:1:8514:9a09:8246:758c) Quit (Ping timeout: 480 seconds)
[20:58] * Tamil (~Adium@2607:f298:a:607:6487:5c8d:9d00:e1ba) has left #ceph
[21:05] * guerby (~guerby@nc10d-ipv6.tetaneutral.net) Quit (Read error: No route to host)
[21:05] * guerby (~guerby@nc10d-ipv6.tetaneutral.net) has joined #ceph
[21:06] <elder> Tv_, would it be possible for me to direct teuthology to get my kernel images from a particular ubuntu path?
[21:07] <elder> Like, tell it "load up my machines with the kernel from here: http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.5.4-quantal/"
[21:08] * loicd (~loic@magenta.dachary.org) has joined #ceph
[21:09] * Cube (~Adium@12.248.40.138) Quit (Quit: Leaving.)
[21:09] <dmick> elder: it's only code, right? :)
[21:11] <joshd> elder: right now kernel.py just downloads debs
[21:11] <joshd> if you want to use a ppa, it's easier to do it manually right now
[21:12] <elder> OK. I was more asking whether it was possible now. Answer is "no."
[21:12] <dmick> yeah, I was just pondering how hard the piping would be from repo to .deb
[21:12] <dmick> and yes, what joshd said
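
The manual route joshd mentions, sketched against the PPA URL elder pasted; the .deb filename below is hypothetical and should be read off the directory listing:

    URL=http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.5.4-quantal
    DEB=linux-image-3.5.4-030504-generic_3.5.4-030504.201209191631_amd64.deb   # hypothetical filename
    wget "$URL/$DEB" && sudo dpkg -i "$DEB" && sudo reboot
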
[21:13] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Read error: Operation timed out)
[21:21] * phantomcircuit (~phantomci@92.40.253.238.threembb.co.uk) Quit (Quit: Leaving)
[21:42] * Cube (~Adium@184-231-7-193.pools.spcsdns.net) has joined #ceph
[21:45] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[21:55] * BManojlovic (~steki@195.13.166.253) has joined #ceph
[22:03] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[22:03] * loicd (~loic@magenta.dachary.org) has joined #ceph
[22:26] * Cube (~Adium@184-231-7-193.pools.spcsdns.net) Quit (Quit: Leaving.)
[22:29] * pentabular (~sean@adsl-70-231-141-128.dsl.snfc21.sbcglobal.net) has joined #ceph
[22:34] * maelfius (~mdrnstm@adsl-99-16-51-31.dsl.lsan03.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[22:47] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[23:04] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[23:04] * loicd (~loic@magenta.dachary.org) has joined #ceph
[23:05] * jamespage (~jamespage@tobermory.gromper.net) Quit (Quit: Coyote finally caught me)
[23:17] * slang (~slang@2607:f298:a:607:9911:cd67:ceed:4ee8) Quit (Ping timeout: 480 seconds)
[23:24] * pentabular (~sean@adsl-70-231-141-128.dsl.snfc21.sbcglobal.net) Quit (Remote host closed the connection)
[23:24] * pentabular (~sean@adsl-70-231-141-128.dsl.snfc21.sbcglobal.net) has joined #ceph
[23:39] * pentabular (~sean@adsl-70-231-141-128.dsl.snfc21.sbcglobal.net) Quit (Quit: pentabular)
[23:44] * slang (~slang@2607:f298:a:607:5cbf:67fd:ead2:6f0f) has joined #ceph
[23:52] * Cube (~Adium@12.248.40.138) has joined #ceph
[23:55] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Read error: Connection reset by peer)
[23:58] * allsystemsarego (~allsystem@188.27.164.159) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.