#ceph IRC Log

IRC Log for 2015-02-11

Timestamps are in GMT/BST.

[0:00] * ircolle-afk is now known as ircolle
[0:01] * georgem (~Adium@184.151.190.211) Quit (Quit: Leaving.)
[0:01] <gleam> keep in mind you can't put rbd images directly onto an ec pool, you have to have a replicated cache tier on top
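The replicated cache tier gleam mentions is layered onto the erasure-coded pool with the tiering commands; a minimal sketch, with pool names and pg counts as placeholders:

    # create a replicated pool to act as the cache in front of the EC pool
    ceph osd pool create cachepool 128 128
    # attach it as a tier of the erasure-coded pool
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    # route client I/O (e.g. rbd) through the cache tier
    ceph osd tier set-overlay ecpool cachepool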
[0:06] * reed (~reed@net-93-144-229-167.cust.dsl.teletu.it) Quit (Quit: Ex-Chat)
[0:08] * rohanm (~rohanm@mobile-166-173-185-157.mycingular.net) has joined #ceph
[0:09] * rohanm (~rohanm@mobile-166-173-185-157.mycingular.net) has left #ceph
[0:09] * rohanm (~rohanm@mobile-166-173-185-157.mycingular.net) has joined #ceph
[0:11] * OutOfNoWhere (~rpb@76.8.45.168) has joined #ceph
[0:12] * azhariitd (~azhariitd@1.39.35.30) has joined #ceph
[0:13] <azhariitd> Hello
[0:13] * azhariitd (~azhariitd@1.39.35.30) Quit (Read error: Connection reset by peer)
[0:15] * jaank (~quassel@98.215.50.223) has joined #ceph
[0:18] * azhariitd (~azhariitd@1.39.35.30) has joined #ceph
[0:18] * azhariitd (~azhariitd@1.39.35.30) Quit (Read error: Connection reset by peer)
[0:25] * azhariitd (~azhariitd@1.39.35.30) has joined #ceph
[0:26] * AndroUser (~androirc@1.39.33.115) has joined #ceph
[0:26] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[0:26] <azhariitd> Hi
[0:26] <AndroUser> Hii azhariitd
[0:26] <AndroUser> Hi azhar
[0:27] <azhariitd> Hi, please help me on adding a new monitor to the cluster
[0:28] <AndroUser> Hi
[0:29] <AndroUser> I will help u azhariitd
[0:29] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[0:30] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[0:30] * togdon (~togdon@74.121.28.6) Quit (Ping timeout: 480 seconds)
[0:30] * AndroUser (~androirc@1.39.33.115) has left #ceph
[0:30] * azhariitd (~azhariitd@1.39.35.30) Quit (Read error: Connection reset by peer)
[0:31] * azhariitd (~azhariitd@1.39.35.30) has joined #ceph
[0:31] * bhaveshiitdelhi (~androirc@1.39.33.115) has joined #ceph
[0:33] * azhariitd (~azhariitd@1.39.35.30) Quit (Read error: Connection reset by peer)
[0:34] * bhaveshiitdelhi (~androirc@1.39.33.115) Quit (Quit: AndroIRC - Android IRC Client ( http://www.androirc.com ))
[0:35] * fsimonce (~simon@host217-37-dynamic.30-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:35] * togdon (~togdon@74.121.28.6) has joined #ceph
[0:35] * jlavoy (~Adium@173.227.74.5) Quit (Ping timeout: 480 seconds)
[0:37] * LeaChim (~LeaChim@host86-159-236-51.range86-159.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[0:44] * mikedawson (~chatzilla@98.227.179.172) has joined #ceph
[0:46] * mozg (~andrei@37.220.104.190) Quit (Quit: Ex-Chat)
[0:47] * xahare_ (~pixel@cpe-23-241-195-16.socal.res.rr.com) has joined #ceph
[0:48] * xahare (~pixel@cpe-23-241-195-16.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:00] * palmeida (~palmeida@gandalf.wire-consulting.com) Quit (Quit: Lost terminal)
[1:01] <cholcombe973> gregsfortytwo: you around?
[1:01] * dmsimard is now known as dmsimard_away
[1:01] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Quit: Away)
[1:01] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[1:02] <gregsfortytwo> uh, sort of?
[1:03] <cholcombe973> do you have more info on this: http://comments.gmane.org/gmane.comp.file-systems.ceph.user/14804
[1:03] <cholcombe973> i'm running into the same thing
[1:03] * scuttle|afk is now known as scuttlemonkey
[1:03] <cholcombe973> i'm symlinking /var/lib/ceph/osd/ceph-0 -> /mnt/sdb for example and when my osds come up they complain about mismatched ids
[1:04] <gregsfortytwo> only what I said about the symlinks are probably unstable and pointing at different hard drives than they were previously
[1:04] <cholcombe973> it's odd because if i rerun the ceph mkfs command it works fine the second time around
[1:05] * rohanm (~rohanm@mobile-166-173-185-157.mycingular.net) Quit (Ping timeout: 480 seconds)
[1:06] <cholcombe973> should i not be symlinking and just mount the drive into /var/lib/ceph/osd/ceph-X ?
[1:06] * jaank (~quassel@98.215.50.223) Quit (Ping timeout: 480 seconds)
[1:07] <gregsfortytwo> I don't do admin stuff, but I gather that /dev/sdX symlinks are often unstable
[1:07] <seapasulli> how can you see what EC profile a pool is using?
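For reference, a pool's profile can be read back with the pool-get and profile-get commands; pool and profile names below are placeholders:

    # which erasure-code profile does the pool use?
    ceph osd pool get ecpool erasure_code_profile
    # what are that profile's parameters (plugin, k, m, ...)?
    ceph osd erasure-code-profile get myprofile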
[1:08] <gregsfortytwo> linux provides stable symlinks based on drive uuids or something which you can use instead, and those won't change
[1:08] <gregsfortytwo> that's probably what you want to do
[1:08] <gregsfortytwo> but this is really not my thing
[1:08] <cholcombe973> ok
[1:08] <cholcombe973> ok thanks for the info
[1:11] <seapasulli> cholcombe973: you should also be able to get a list of uuids for each device you are mounting via blkid /dev/${device}${partition}
[1:11] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[1:11] <seapasulli> which should give you the uuid you need to use in fstab
[1:11] <cholcombe973> good point
[1:12] <cholcombe973> i suspect that ceph-osd mkfs is getting screwed up by the symlink
[1:12] <seapasulli> probably. I know ceph-deploy sets it up to mount directly to /var/lib/ceph/osd/ceph-${osdid}
[1:12] <cholcombe973> yeah i'm going to do that to see if it helps
[1:12] <seapasulli> GL ^_^
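A minimal sketch of the UUID-based mount discussed above, with device, UUID, and OSD id as placeholders:

    # find the filesystem UUID of the OSD's data disk
    blkid /dev/sdb1

    # /etc/fstab: mount by UUID so device-name reshuffles can't break the OSD
    UUID=3e6be9de-0000-0000-0000-a43f08d823a6  /var/lib/ceph/osd/ceph-0  xfs  noatime  0 2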
[1:13] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[1:14] * oms101 (~oms101@p20030057EA225E00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:16] * togdon (~togdon@74.121.28.6) has joined #ceph
[1:17] * ircolle (~Adium@2601:1:a580:145a:139:9095:56de:8570) Quit (Quit: Leaving.)
[1:19] * schmee (~quassel@phobos.isoho.st) Quit (Ping timeout: 480 seconds)
[1:23] * oms101 (~oms101@p20030057EA1A5C00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:27] * mdxi_ (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) Quit (Quit: leaving)
[1:31] * mdxi (~mdxi@50-199-109-154-static.hfc.comcastbusiness.net) has joined #ceph
[1:37] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[1:37] * lcurtis (~lcurtis@47.19.105.250) Quit (Ping timeout: 480 seconds)
[1:37] * jaank (~quassel@98.215.50.223) has joined #ceph
[1:40] * schmee (~quassel@phobos.isoho.st) has joined #ceph
[1:44] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[1:45] * togdon (~togdon@74.121.28.6) Quit (Quit: Textual IRC Client: www.textualapp.com)
[1:47] * guoozz (~guoozz@223.81.204.252) has joined #ceph
[1:48] * guoozz (~guoozz@223.81.204.252) Quit ()
[1:50] <steveeJ> how much impact do monitors have on performance?
[1:51] <steveeJ> if at all, what would be the significant factor?
[1:56] * bandrus (~brian@54.sub-70-211-78.myvzw.com) has joined #ceph
[2:00] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:02] <blahnana> steveeJ, do you mean, how much cpu/ram does a monitor use?
[2:02] <blahnana> or do you mean, "can I run a monitor on a raspberrypi for my 48 OSD cluster"
[2:04] <steveeJ> blahnana: the latter would be nice to know
[2:05] <steveeJ> since monitors serve a vital function in ceph it would be nice to have plenty of them, but if slow devices make the cluster slow it would be counterproductive to a certain degree
[2:06] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) has joined #ceph
[2:06] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[2:09] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:10] * rohanm (~rohanm@mobile-166-173-185-157.mycingular.net) has joined #ceph
[2:14] * nitti (~nitti@c-66-41-30-224.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[2:15] * shyu (~shyu@119.254.196.66) has joined #ceph
[2:15] * sputnik13 (~sputnik13@74.202.214.170) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:21] * sudocat (~davidi@192.185.1.20) Quit (Ping timeout: 480 seconds)
[2:23] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[2:30] * destrudo (~destrudo@64.142.74.180) Quit (Ping timeout: 480 seconds)
[2:31] * rohanm (~rohanm@mobile-166-173-185-157.mycingular.net) Quit (Ping timeout: 480 seconds)
[2:33] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:40] * kefu (~kefu@114.92.113.105) has joined #ceph
[2:41] * zhaochao (~zhaochao@111.161.77.232) has joined #ceph
[2:46] * swami1 (~swami@223.227.108.129) has joined #ceph
[2:51] * swami1 (~swami@223.227.108.129) Quit (Quit: Leaving.)
[2:54] * cholcombe973 (~chris@7208-76ef-ff1f-ed2f-329a-f002-3420-2062.6rd.ip6.sonic.net) has left #ceph
[2:59] * Concubidated (~Adium@2607:f298:b:635:2c52:caab:e95c:9201) Quit (Ping timeout: 480 seconds)
[3:01] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit (Quit: Leaving.)
[3:04] * rmoe (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) has joined #ceph
[3:08] * calvinx (~calvin@76.164.201.51) has joined #ceph
[3:09] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[3:12] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit ()
[3:22] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[3:24] * PingKuo (~ping@123.51.160.200) has joined #ceph
[3:24] * capri (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[3:26] * elder_ (~elder@210.177.145.249) has joined #ceph
[3:27] * kefu_ (~kefu@114.86.208.30) has joined #ceph
[3:29] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Quit: Away)
[3:29] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[3:29] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit ()
[3:29] * macjack (~Thunderbi@123.51.160.200) Quit (Ping timeout: 480 seconds)
[3:30] * kefu (~kefu@114.92.113.105) Quit (Ping timeout: 480 seconds)
[3:33] * sjm (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[3:35] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[3:39] * macjack (~Thunderbi@123.51.160.200) has joined #ceph
[3:39] * zhithuang (~zhithuang@202.76.244.5) has joined #ceph
[3:40] * zhithuang is now known as winston-d_
[3:45] * PingKuo (~ping@123.51.160.200) Quit (Ping timeout: 480 seconds)
[3:53] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:55] * kefu_ (~kefu@114.86.208.30) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[3:58] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[3:58] * kefu (~kefu@114.86.208.30) has joined #ceph
[3:59] * joef (~Adium@2601:9:280:f2e:c9bb:1292:8dcf:daba) has joined #ceph
[4:04] * lcurtis (~lcurtis@ool-18bfec0b.dyn.optonline.net) has joined #ceph
[4:21] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit (Quit: Leaving.)
[4:24] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[4:24] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has left #ceph
[4:27] * sudocat (~davidi@2601:e:2b80:9920:db7:243f:5a79:2e9a) has joined #ceph
[4:30] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[4:38] * winston-1_ (~zhithuang@116.227.70.237) has joined #ceph
[4:41] * winston-d_ (~zhithuang@202.76.244.5) Quit (Ping timeout: 480 seconds)
[4:43] * winston-1_ (~zhithuang@116.227.70.237) Quit ()
[4:47] * joef (~Adium@2601:9:280:f2e:c9bb:1292:8dcf:daba) has left #ceph
[4:50] * VisBits (~VisBits@cpe-174-101-246-167.cinci.res.rr.com) has joined #ceph
[4:50] <VisBits> Should ceph.conf be identical on all mon, mds, osd etc?
[4:51] <joshd> easier to manage it that way, but it doesn't have to be
[4:51] <VisBits> is it recommended to edit on the primary monitor and use "ceph-deploy config push hostname" to store the config?
[4:52] <VisBits> to osds mds etc
[4:53] <VisBits> I find that the documentation is conflicting as to what needs to be defined in ceph.conf to have a reliable functional cluster; the mon_host / mon_initial_members settings conflict with the monitor-specific settings in the examples provided.. help? lol
[4:54] <joshd> mon initial members is just used by the monitors when first creating the cluster. mon host is read by everything else after that to find the monitors
[4:55] <VisBits> whats the proper way to suggest changes to ceph documentation?
[4:56] <joshd> you can file a bug in http://tracker.ceph.com, or a pull request against the docs in the source tree (http://github.com/ceph/ceph/tree/master/doc)
[4:57] * vbellur (~vijay@122.167.168.113) Quit (Ping timeout: 480 seconds)
[4:57] <VisBits> thank you joshd for the help! :-)
[4:57] <joshd> you're welcome!
[4:58] <VisBits> oh one more thing, do you have any recommended practice of organization for keeping track of osd location etc. Is it possible to name osd based on their host? node01.disk1 as the osd name?
[4:59] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[5:00] <joshd> unfortunately no, each osd just gets the next available number when it's created. you can see where they are with 'ceph osd tree' though
[5:00] <VisBits> do i need to manually edit ceph.conf every time i create an osd and add the line in there?
[5:02] <joshd> no, the minimal config is static unless you're adding/removing monitors
[5:03] <VisBits> monitors are mon.a-z?
[5:04] <joshd> they can have arbitrary names, people often use the host they're on
[5:06] * swami1 (~swami@49.32.0.159) has joined #ceph
[5:10] * PingKuo (~ping@123.51.160.200) has joined #ceph
[5:12] * Vacuum (~vovo@88.130.204.38) has joined #ceph
[5:17] * jaank (~quassel@98.215.50.223) Quit (Ping timeout: 480 seconds)
[5:17] <VisBits> joshd, should my mon_initial_members list the primary monitor, then the config file list the rest of them? is that the desired configuration?
[5:19] * Vacuum_ (~vovo@88.130.197.210) Quit (Ping timeout: 480 seconds)
[5:19] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[5:20] * amote (~amote@121.244.87.116) has joined #ceph
[5:20] <joshd> VisBits: list all the ones you're starting out with in both places
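A minimal ceph.conf sketch of what joshd describes, with monitor names and addresses as placeholders:

    [global]
    # consulted only while bootstrapping the monitor cluster
    mon initial members = mon1, mon2, mon3
    # consulted by daemons and clients afterwards to find the monitors
    mon host = 192.168.0.1, 192.168.0.2, 192.168.0.3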
[5:21] <VisBits> okay thanks. :-)
[5:21] * sudocat (~davidi@2601:e:2b80:9920:db7:243f:5a79:2e9a) Quit (Ping timeout: 480 seconds)
[5:25] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) has joined #ceph
[5:26] * segutier (~segutier@67.142.235.252) has joined #ceph
[5:28] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[5:39] * OutOfNoWhere (~rpb@76.8.45.168) Quit (Ping timeout: 480 seconds)
[5:45] * mykola (~Mikolaj@91.225.200.48) has joined #ceph
[5:51] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[5:51] * puffy (~puffy@50.185.218.255) has joined #ceph
[5:52] * swami2 (~swami@49.32.0.159) has joined #ceph
[5:54] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[5:58] * swami1 (~swami@49.32.0.159) Quit (Ping timeout: 480 seconds)
[5:59] * rwheeler (~rwheeler@bzq-80-62-194.red.bezeqint.net) has joined #ceph
[5:59] * rwheeler (~rwheeler@bzq-80-62-194.red.bezeqint.net) Quit ()
[6:00] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[6:01] * aarcane_ (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) Quit (Quit: Leaving)
[6:03] * kefu is now known as kefu|afk
[6:05] * rdas (~rdas@121.244.87.116) has joined #ceph
[6:09] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[6:13] * kefu|afk (~kefu@114.86.208.30) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:17] * lcurtis (~lcurtis@ool-18bfec0b.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[6:18] * cooldharma06 (~chatzilla@218.248.25.100) has joined #ceph
[6:23] * VisBits (~VisBits@cpe-174-101-246-167.cinci.res.rr.com) Quit (Quit: ~ Trillian - www.trillian.im ~)
[6:23] * calvinx (~calvin@76.164.201.51) Quit (Read error: No route to host)
[6:24] * calvinx (~calvin@103.7.202.198) has joined #ceph
[6:24] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) has joined #ceph
[6:31] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) has joined #ceph
[6:38] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:43] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:44] * segutier (~segutier@67.142.235.252) Quit (Quit: segutier)
[6:44] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[6:51] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:53] * Concubidated1 (~Adium@66-87-67-47.pools.spcsdns.net) has joined #ceph
[6:55] * mikedawson (~chatzilla@98.227.179.172) Quit (Quit: ChatZilla 0.9.91.1 [Firefox 35.0.1/20150122214805])
[6:58] * overclk (~overclk@121.244.87.117) has joined #ceph
[7:00] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[7:01] * Concubidated (~Adium@71.21.5.251) Quit (Ping timeout: 480 seconds)
[7:02] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[7:03] * vilobhmm_ (~vilobhmm@98.139.248.67) has joined #ceph
[7:03] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:08] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:08] * vilobhmm_ is now known as vilobhmm
[7:09] * kefu (~kefu@114.86.208.30) has joined #ceph
[7:18] * vbellur (~vijay@121.244.87.117) has joined #ceph
[7:21] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[7:25] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[7:29] * calvinx (~calvin@103.7.202.198) Quit (Quit: calvinx)
[7:37] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[7:42] * elder_ (~elder@210.177.145.249) Quit (Quit: Leaving)
[7:47] * cooldharma06 (~chatzilla@218.248.25.100) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 21.0/20130515140136])
[7:47] * destrudo (~destrudo@64.142.74.180) has joined #ceph
[7:50] * davidz (~davidz@2605:e000:1313:8003:9083:5463:ed5a:3994) has joined #ceph
[7:58] * davidz1 (~davidz@2605:e000:1313:8003:a179:a227:e39d:82ec) Quit (Ping timeout: 480 seconds)
[7:58] * Concubidated1 is now known as Concubidated
[8:00] * shang (~ShangWu@175.41.48.77) has joined #ceph
[8:07] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[8:08] * badone (~brad@66.187.239.16) Quit (Ping timeout: 480 seconds)
[8:16] * linjan (~linjan@213.8.240.146) has joined #ceph
[8:17] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[8:21] * alram (~alram@ppp-seco11pa2-46-193-140-198.wb.wifirst.net) has joined #ceph
[8:23] * vilobhmm (~vilobhmm@98.139.248.67) Quit (Ping timeout: 480 seconds)
[8:23] * Concubidated (~Adium@66-87-67-47.pools.spcsdns.net) Quit (Quit: Leaving.)
[8:29] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[8:30] * foosinn (~stefan@ipb21bde9a.dynamic.kabel-deutschland.de) has joined #ceph
[8:40] * Sysadmin88 (~IceChat77@94.12.240.104) Quit (Quit: It's a dud! It's a dud! It's a du...)
[8:40] * cok (~chk@2a02:2350:18:1010:acd1:f39f:6727:75a0) has joined #ceph
[8:47] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[8:49] * yerrysherry (~yerrysher@ns.milieuinfo.be) has joined #ceph
[8:51] * alram (~alram@ppp-seco11pa2-46-193-140-198.wb.wifirst.net) Quit (Ping timeout: 480 seconds)
[8:51] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[8:53] * liiwi_ is now known as liiwi
[8:56] * segutier (~segutier@rrcs-97-79-140-195.sw.biz.rr.com) has joined #ceph
[8:57] * calvinx (~calvin@103.7.202.198) has joined #ceph
[8:58] * aarcane (~aarcane@99-42-64-118.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[9:00] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[9:00] * thb (~me@port-34072.pppoe.wtnet.de) has joined #ceph
[9:01] * segutier (~segutier@rrcs-97-79-140-195.sw.biz.rr.com) Quit ()
[9:01] * segutier (~segutier@rrcs-97-79-140-195.sw.biz.rr.com) has joined #ceph
[9:01] * analbeard (~shw@support.memset.com) has joined #ceph
[9:01] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[9:08] * dgurtner (~dgurtner@178.197.231.20) has joined #ceph
[9:11] * PingKuo (~ping@123.51.160.200) Quit (Ping timeout: 480 seconds)
[9:12] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[9:15] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) Quit (Quit: Coyote finally caught me)
[9:15] * scuttlemonkey (~scuttle@ns1-rdu.redhat.com) Quit (Quit: Coyote finally caught me)
[9:15] * dis_ (~idryomov@nat-pool-rdu-t.redhat.com) Quit (Remote host closed the connection)
[9:17] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[9:17] * bandrus (~brian@54.sub-70-211-78.myvzw.com) Quit (Ping timeout: 480 seconds)
[9:19] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[9:19] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[9:20] * PingKuo (~ping@123.51.160.200) has joined #ceph
[9:21] * segutier (~segutier@rrcs-97-79-140-195.sw.biz.rr.com) Quit (Quit: segutier)
[9:21] <ZyTer> hi
[9:22] <ZyTer> can i remove the default pools : data, metadata, rbd
[9:23] <ZyTer> (on a fresh install, or not..)
[9:25] <absynth> for data and rbd: sure, you can delete them. metadata: not actually sure
[9:25] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[9:27] <ZyTer> absynth: ok thanks ! :)
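Pool deletion is deliberately awkward; with default settings it looks like this (the pool name is given twice as a safety check):

    ceph osd pool delete data data --yes-i-really-really-mean-it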
[9:27] * bandrus (~brian@54.sub-70-211-78.myvzw.com) has joined #ceph
[9:30] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[9:33] * elder_ (~elder@210.177.145.249) has joined #ceph
[9:35] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:42] * fdmanana (~fdmanana@bl13-151-100.dsl.telepac.pt) has joined #ceph
[9:42] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[9:44] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:45] * rwheeler (~rwheeler@nat-pool-tlv-u.redhat.com) has joined #ceph
[9:48] * segutier (~segutier@rrcs-97-79-140-195.sw.biz.rr.com) has joined #ceph
[9:50] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[9:50] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[9:50] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[9:52] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:55] * PingKuo (~ping@123.51.160.200) Quit (Ping timeout: 480 seconds)
[9:57] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[9:58] * fsimonce (~simon@host217-37-dynamic.30-79-r.retail.telecomitalia.it) has joined #ceph
[9:59] * ScOut3R (~ScOut3R@catv-80-98-46-171.catv.broadband.hu) has joined #ceph
[10:01] * jaank (~quassel@98.215.50.223) has joined #ceph
[10:04] * vbellur (~vijay@121.244.87.124) has joined #ceph
[10:06] * foosinn (~stefan@ipb21bde9a.dynamic.kabel-deutschland.de) Quit (Quit: Leaving)
[10:11] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[10:12] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[10:13] * Concubidated1 (~Adium@71.21.5.251) has joined #ceph
[10:13] * Concubidated (~Adium@71.21.5.251) Quit (Read error: Connection reset by peer)
[10:14] * asalor (~asalor@0001ef37.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:14] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[10:20] * Concubidated1 (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[10:21] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[10:21] * BManojlovic (~steki@178-221-74-244.dynamic.isp.telekom.rs) has joined #ceph
[10:24] * jaank (~quassel@98.215.50.223) Quit (Read error: Connection reset by peer)
[10:25] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Quit: Lost terminal)
[10:26] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:28] * capri (~capri@212.218.127.222) has joined #ceph
[10:32] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[10:33] * palmeida (~palmeida@gandalf.wire-consulting.com) has joined #ceph
[10:34] * cok (~chk@2a02:2350:18:1010:acd1:f39f:6727:75a0) Quit (Quit: Leaving.)
[10:37] * bandrus (~brian@54.sub-70-211-78.myvzw.com) Quit (Ping timeout: 480 seconds)
[10:37] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[10:40] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[10:41] * fouxm (~foucault@ks01.commit.ninja) Quit (Quit: ZNC - http://znc.in)
[10:41] * fouxm (~foucault@ks01.commit.ninja) has joined #ceph
[10:46] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[10:48] * bandrus (~brian@54.sub-70-211-78.myvzw.com) has joined #ceph
[10:48] * ssejourne (~ssejourne@2001:41d0:52:300::d16) Quit (Quit: WeeChat 0.4.2)
[10:49] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[10:49] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[10:55] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[10:58] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[11:00] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[11:01] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[11:02] * shyu (~shyu@119.254.196.66) Quit (Remote host closed the connection)
[11:08] * brutusca_ (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[11:08] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[11:12] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[11:15] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Remote host closed the connection)
[11:15] * abdork (~ceph@119.92.91.1) has joined #ceph
[11:16] <abdork> Hi, is it possible to set erasure coding on an existing pool?
[11:16] <abdork> or do I need to re-create it?
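A pool's type is fixed at creation, so the usual route is to create a new erasure-coded pool and migrate the data into it; a sketch, with profile name and parameters as examples only:

    # define a profile: k data chunks plus m coding chunks
    ceph osd erasure-code-profile set myprofile k=4 m=2
    # create a new pool of type 'erasure' using that profile
    ceph osd pool create ecpool 256 256 erasure myprofile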
[11:18] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[11:24] * brutusca_ (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[11:24] * kefu (~kefu@114.86.208.30) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[11:24] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[11:25] * ScOut3R_ (~ScOut3R@catv-80-98-46-171.catv.broadband.hu) has joined #ceph
[11:25] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[11:27] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[11:27] * bandrus (~brian@54.sub-70-211-78.myvzw.com) Quit (Ping timeout: 480 seconds)
[11:28] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[11:31] * ScOut3R (~ScOut3R@catv-80-98-46-171.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[11:38] * oro (~oro@2001:620:20:16:e433:3ad3:683:82b4) has joined #ceph
[11:39] <ZyTer> i use ceph with proxmox, and i have 4 iscsi disks -> /dev/sdk, /dev/sdi, /dev/sdj, /dev/sdk , and for ceph: sdb,sdc,sdd,sde,sdf,sdg ( SSD+SATA )
[11:40] <ZyTer> i have 3 servers, but on one server the disk /dev/sdk is a local disk and not the iscsi one, and sdg is iscsi
[11:41] <ZyTer> can i change the disk assignment, to put /dev/sdk -> iscsi, and /dev/sdg -> ceph OSD ?
[11:41] * zhaochao (~zhaochao@111.161.77.232) has left #ceph
[11:42] * elder_ (~elder@210.177.145.249) Quit (Quit: Leaving)
[11:45] * kefu (~kefu@114.86.208.30) has joined #ceph
[11:47] * danieagle (~Daniel@201-95-103-54.dsl.telesp.net.br) has joined #ceph
[11:48] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[11:51] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[11:52] * derjohn_mob (~aj@94.119.1.11) has joined #ceph
[11:52] * bkopilov (~bkopilov@bzq-79-179-13-13.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[11:55] * bkopilov (~bkopilov@bzq-79-180-34-44.red.bezeqint.net) has joined #ceph
[11:57] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[11:59] * swami2 (~swami@49.32.0.159) Quit (Quit: Leaving.)
[11:59] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[12:04] * kefu (~kefu@114.86.208.30) Quit (Max SendQ exceeded)
[12:05] * kefu (~kefu@114.86.208.30) has joined #ceph
[12:06] * lucas1 (~Thunderbi@218.76.52.64) Quit (Remote host closed the connection)
[12:08] * bkopilov (~bkopilov@bzq-79-180-34-44.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[12:08] * bkopilov (~bkopilov@109.66.134.48) has joined #ceph
[12:12] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[12:14] * derjohn_mob (~aj@94.119.1.11) Quit (Ping timeout: 480 seconds)
[12:16] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[12:17] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[12:18] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:18] * yerrysherry (~yerrysher@ns.milieuinfo.be) Quit (Ping timeout: 480 seconds)
[12:20] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[12:23] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[12:28] * vbellur (~vijay@121.244.87.117) has joined #ceph
[12:29] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: This computer has gone to sleep)
[12:31] * abdork (~ceph@119.92.91.1) Quit (Quit: abdork)
[12:41] * diegows (~diegows@190.190.5.238) has joined #ceph
[12:42] * ScOut3R (~ScOut3R@catv-80-98-46-171.catv.broadband.hu) has joined #ceph
[12:42] * diegows (~diegows@190.190.5.238) Quit ()
[12:45] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[12:47] * tim|kumina (~tim@82-171-142-11.ip.telfort.nl) has joined #ceph
[12:47] <tim|kumina> hey all, anyone know if progress is being made in making cephfs production ready? i think i've seen the disclaimer for at least the last three years now :S
[12:49] * ScOut3R_ (~ScOut3R@catv-80-98-46-171.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:50] * cok (~chk@nat-cph5-sys.net.one.com) has joined #ceph
[12:52] <darkfader> tim|kumina it's a lot more stable than 3 years ago. that counts as progress!
[12:53] <tim|kumina> darkfader: heh true that :)
[12:53] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[12:53] <tim|kumina> but i was wondering how much work it would be to have it declared production ready and if there's activity towards it
[12:53] <tim|kumina> if there's any schedule for it or something
[12:55] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[12:55] * reed (~reed@net-93-144-229-167.cust.dsl.teletu.it) has joined #ceph
[13:03] <jcsp> tim|kumina: yes, there's plenty of work going on. Feel free to look at the git history, the 'fs' component on tracker.ceph.com, or attend one of the events where we speak about it (e.g. https://www.youtube.com/watch?v=_JK1eFenY6I)
[13:04] <tim|kumina> jcsp: thx for the link! looking at it now and seems to start with exactly my question :)
[13:04] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[13:05] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[13:05] * yanzheng (~zhyan@182.139.205.12) has joined #ceph
[13:09] <flaf> Hi, it's possible to increase the pg_num of a pool but it's impossible to decrease the pg_num of a pool. Is that correct?
[13:10] <jcsp> flaf: correct.
[13:10] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:12] <flaf> jcsp: ok. Thx. So if I chose a pg_num that was too big for a pool some time ago, there is no way for me to decrease the pg_num?
[13:12] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[13:12] <jcsp> that's what "impossible" means, yep :-)
[13:14] <flaf> But some people have already had the same problem as me in real life. How did they solve this problem?
[13:15] <jcsp> you'd need to get your data out of your too-many-pgs pool into somewhere else, and then remove that pool
[13:17] * bkopilov (~bkopilov@109.66.134.48) Quit (Ping timeout: 480 seconds)
[13:18] * bkopilov (~bkopilov@bzq-79-177-117-89.red.bezeqint.net) has joined #ceph
[13:19] <flaf> Ok jcsp. I did not think enough when I created my pools. This is my fault. ;) Thx jcsp.
[13:21] <flaf> It's about some pools of radosgw (.users, .rgw.*, etc.); the pg_num values are too big.
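Checking the current value for a suspect pool is a one-liner (pool name is an example):

    # print the pool's current pg_num
    ceph osd pool get .rgw.buckets pg_num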
[13:22] * dmick (~dmick@2607:f298:a:607:155f:b779:1d27:8614) Quit (Ping timeout: 480 seconds)
[13:28] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[13:30] * branto1 (~branto@213.175.37.10) has joined #ceph
[13:31] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[13:32] * dmick (~dmick@2607:f298:a:607:c937:1916:c8b2:a4ee) has joined #ceph
[13:36] * CephTestC (~CephTestC@199.91.185.156) Quit (Ping timeout: 480 seconds)
[13:36] * CephTestC (~CephTestC@199.91.185.156) has joined #ceph
[13:40] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[13:41] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:48] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[13:49] * bkopilov (~bkopilov@bzq-79-177-117-89.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[13:49] * bkopilov (~bkopilov@109.67.167.181) has joined #ceph
[13:53] * rohanm (~rohanm@mobile-166-173-185-157.mycingular.net) has joined #ceph
[13:54] * James_259 (51bbfe8a@107.161.19.109) has joined #ceph
[13:55] * dis (~dis@109.110.67.201) has joined #ceph
[13:55] <James_259> Hi. I want to add a bunch of osd's to my cluster. Can anyone tell me if it is safe to add them one by one without waiting for the rebalancing to complete, or should I add one, wait for it to complete, then add the next one, wait, etc?
[13:56] <James_259> Obviously by the time I come to add the second one it would already be backfilling to the first. Just want to know if its safe to add more while backfilling is in progress.
[14:00] <jcsp> James_259: should be safe, but might not be the most efficient as some minority of data might get moved twice.
[14:00] <jcsp> you could add all OSDs with the noin flag set, and then remove the flag so that they all get marked 'in' at more or less the same time
[14:01] <jcsp> but that will put a larger bandwidth burden on the OSD population at a single time
[14:02] <James_259> Thank you jcsp. I plan to do it out of hours anyway. Wouldn't doing them one at a time and waiting between each one result in more data being moved twice anyway?
[14:06] <jcsp> *rereads*… yes, indeed. I should have said that not waiting would be safe, and waiting would incur the penalty.
[14:06] <jcsp> but in either case you probably want to separate the creation of the OSDs from the data movement by creating them all 'out' and then applying whatever strategy you choose to the 'in'-ness.
[14:07] <James_259> thank you very much.
[14:08] <Vacuum> How about bringing them all in at once, but with a weight of 0.05, then increasing the weight from time to time by 0.05?
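Sketches of both approaches, with osd ids and weights as placeholders: the noin flag keeps newly started OSDs from being marked 'in' automatically, while a small CRUSH weight limits how much data an OSD attracts.

    # jcsp's variant: create everything first, move data once
    ceph osd set noin                # newly booted OSDs stay 'out'
    # ... create and start all the new OSDs ...
    ceph osd unset noin
    ceph osd in osd.10 osd.11        # mark them 'in' together

    # Vacuum's variant: ramp the CRUSH weight up stepwise
    ceph osd crush reweight osd.10 0.05
    # later, raise toward the disk's full weight in small increments
    ceph osd crush reweight osd.10 0.10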
[14:08] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[14:10] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[14:11] <James_259> I have seen that suggested on some forums to reduce the impact of the rebalancing. However, in my case I intend to do it late night and get the whole thing done as quickly as possible so I assume it would be best for me to just let it go all the way in one go, right? There is not too much data so it should be feasible to complete within 6 hours or so.
[14:12] * rohanm (~rohanm@mobile-166-173-185-157.mycingular.net) Quit (Ping timeout: 480 seconds)
[14:12] * tupper_ (~tcole@rtp-isp-nat-pool1-1.cisco.com) Quit (Remote host closed the connection)
[14:14] <flaf> If I want to copy data from poolA to poolB, is there anything more suitable than this: rados -p poolA ls list.txt && for o in $(cat list.txt); do rados -p poolA get $o /tmp/$o.data; rados -p poolB put $o /tmp/$o.data; rm /tmp/$o.data; done
[14:14] <flaf> ?
[14:15] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[14:16] <flaf> (Is it possible to copy objects without creating a dump in the filesystem?)
[14:16] * cok (~chk@nat-cph5-sys.net.one.com) Quit (Quit: Leaving.)
[14:19] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) has joined #ceph
[14:19] * bandrus (~brian@54.sub-70-211-78.myvzw.com) has joined #ceph
[14:22] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[14:24] * derjohn_mob (~aj@94.119.1.11) has joined #ceph
[14:29] * tupper_ (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[14:32] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[14:34] * sh (~sh@2001:6f8:1337:0:353e:562a:3774:9663) Quit (Quit: Verlassend)
[14:36] * BManojlovic (~steki@178-221-74-244.dynamic.isp.telekom.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[14:38] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:38] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:39] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has left #ceph
[14:39] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[14:45] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[14:46] * BManojlovic (~steki@178-221-74-244.dynamic.isp.telekom.rs) has joined #ceph
[14:49] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) has joined #ceph
[14:52] * rohanm (~rohanm@mobile-166-173-185-157.mycingular.net) has joined #ceph
[14:54] <James_259> Hi flaf. I don't know anything about using the rados tool but what you are doing sounds like something very well suited to using a pipe. This depends on whether rados supports input and output via pipes though. I see in the docs the rados ls command can output to stdout (which can easily be piped) by putting - for the outfile. If this works for get and put then you might be able to do something like this:-
[14:54] <James_259> rados -p poolA get $o - | rados -p poolB put $o -
[14:54] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[14:54] <James_259> I do not know if that is possible though. maybe someone with more knowledge of the rados tool can confirm.
[14:58] <James_259> failing that and depending how big your objects are and how much ram you have - you could setup a ram disk instead of using local disk to speed things up.
[15:00] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[15:00] * BManojlovic (~steki@178-221-74-244.dynamic.isp.telekom.rs) Quit (Ping timeout: 480 seconds)
[15:00] * yanzheng (~zhyan@182.139.205.12) Quit (Quit: ??????)
[15:02] * sjm (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) has joined #ceph
[15:05] * marrusl (~mark@cpe-24-90-46-248.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:06] * tupper_ (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[15:07] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[15:08] * jrocha (~jrocha@vagabond.cern.ch) has joined #ceph
[15:08] * dyasny (~dyasny@173.231.115.58) Quit ()
[15:08] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[15:08] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[15:08] * shaunm (~shaunm@74.215.76.114) Quit (Quit: Ex-Chat)
[15:09] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[15:11] <tim|kumina> can a rados block device be attached to multiple servers? so you can do clustered fs with something like ocfs2?
[15:13] <flaf> Oh James_259, thx. You're right. ;) This works very well with Firefly --> for o in $(rados -p poolA ls -); do rados -p poolA get $o - | rados -p poolB put $o -; done
[15:14] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[15:14] <James_259> Great. :)
[15:14] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[15:14] <flaf> I didn't think that "rados put" could read an object from stdin. Thx.
[15:15] <James_259> tim|kumina - I believe you can do that. I see it mentioned in this discussion: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-May/031427.html
[15:15] * kefu (~kefu@114.86.208.30) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[15:15] <tim|kumina> cool, finally something to replace drbd with
[15:16] <James_259> flaf, I didn't know that either. I just saw it was available for the ls command so thought it was worth a try. great to know it works.
[15:16] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:16] * tupper_ (~tcole@rtp-isp-nat-pool1-1.cisco.com) has joined #ceph
[15:16] <flaf> tim|kumina: yes you can attach an rbd image on several nodes, but then you must use a cluster-aware filesystem.
[15:17] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[15:17] <tim|kumina> flaf: yes, was looking to replace our drbd+ocfs2 stuff... since cephfs isn't there yet, just being able to drop drbd would be nice as well
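A rough sketch of the shared-RBD setup being discussed, leaving the OCFS2 cluster-stack configuration itself aside; image name and size are placeholders:

    # create the image once
    rbd create rbd/shared --size 102400
    # map it on every node that will join the OCFS2 cluster
    rbd map rbd/shared               # appears as e.g. /dev/rbd0
    # make the cluster filesystem once, from a single node
    mkfs.ocfs2 /dev/rbd0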
[15:18] <flaf> James_259: I saw it for ls but I have not dared to think that it could be extended to put and get. :)
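One caveat with the $( ) form: it word-splits, so object names containing whitespace break the loop. A slightly more defensive variant of the same one-liner:

    rados -p poolA ls - | while read -r o; do
        rados -p poolA get "$o" - | rados -p poolB put "$o" -
    done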
[15:18] * bandrus (~brian@54.sub-70-211-78.myvzw.com) Quit (Quit: Leaving.)
[15:19] <flaf> tim|kumina: yes. I don't know much about cephfs but I know that some people use it in production and it works well.
[15:20] <flaf> tim|kumina: there is another little problem with cephfs currently: you can have just one cephfs per cluster.
[15:20] <flaf> while you can have several rbd images per cluster.
[15:20] <tim|kumina> yea, just heard that from the John Spray presentation :)
[15:21] * sig_wall (~adjkru@xn--hwgz2tba.lamo.su) has joined #ceph
[15:21] <flaf> For instance, I have read here that some guys use cephfs as a "web filesystem" without any problem.
[15:22] <flaf> And sometimes I have read, "if you use cephfs, check your backups" :)
[15:24] * nitti (~nitti@162.222.47.218) has joined #ceph
[15:27] * lalatenduM (~lalatendu@86.59.2.154) has joined #ceph
[15:33] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[15:37] <James_259> I had heard it had performance issues but nothing about losing data. checking your backups is always good advice though. :P
[15:37] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[15:38] <James_259> not used it personally though other than a quick play for a couple of hours
[15:38] * cok (~chk@2a02:2350:18:1010:f4c5:f71:88a5:41f9) has joined #ceph
[15:39] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[15:41] * BManojlovic (~steki@178-221-74-244.dynamic.isp.telekom.rs) has joined #ceph
[15:42] * burley (~khemicals@cpe-98-28-233-158.woh.res.rr.com) has joined #ceph
[15:44] * derjohn_mob (~aj@94.119.1.11) Quit (Ping timeout: 480 seconds)
[15:45] <burley> So we're seeing page allocation failures for processes on our ceph OSD nodes (though not for Ceph processes, usually for the swapper process) periodically. We've tuned vm.min_free_kbytes (2097152), vm.zone_reclaim_mode (1) and vm.vfs_cache_pressure (1000) -- which reduced the frequency of page allocation failures, but they are still happening after which performance tanks on that OSD node, any ideas?
[15:46] <burley> also, anyone know if running numad on a ceph osd node is safe, or does it impact performance in a bad way?
[15:46] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[15:46] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[15:47] <tim|kumina> does hw raid make sense for an OSD, btw?
[15:48] <James_259> I dont think so. maybe BBWC would be a bonus but other than that, ceph is implementing a kind of raid anyway
[15:51] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[15:51] <James_259> I wondered if raid0 might help with throughput but the biggest issue I have with ceph cluster performance is the disks thrashing (seeking constantly) so while raid0 might improve sequential read.. I think having independent disks/osd's might help distribute random access load more and stop the disks reaching the point where they are thrashing as quickly.
[15:51] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[15:52] <burley> James_259: Are all disks thrashing at the same time or just a few?
[15:53] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[15:55] <James_259> it was a problem when we only had 3-4 osd's. most if not all disks were thrashing and the cluster was very slow. However, adding more osd's to distribute the load made things much better. as soon as you get to the point that the disks are not backlogged, performance came right up.
[15:55] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[15:56] <burley> if the thrashing was out of balance, could have been an undersized pg_num/pgp_num
[15:57] <James_259> I think it is generally suggested that you should aim for a minimum of 10 osd's for good performance. We now have 5 (across 5 hosts) osd's and are about to double up to 10.
[15:57] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[15:57] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Quit: Exeunt dneary)
[15:58] <James_259> Ahh - you might be right there. There was originally a low pg count but when I added the extra 2 osd's, ceph complained about it so I brought it up to 1024
[15:58] <James_259> I only brought it up because ceph complained about it. Never thought it might have contributed to the performance improvement. Thanks for the tip.
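The bump James_259 describes is applied per pool: pg_num first, then pgp_num so the new placement groups are actually used for placement. Pool name is a placeholder:

    ceph osd pool set rbd pg_num 1024
    ceph osd pool set rbd pgp_num 1024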
[15:58] * branto (~branto@213.175.37.10) has joined #ceph
[16:00] * bkopilov (~bkopilov@109.67.167.181) Quit (Ping timeout: 480 seconds)
[16:02] <burley> another sign would be an imbalance in disk usage between the OSDs
[16:02] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[16:02] * branto1 (~branto@213.175.37.10) Quit (Ping timeout: 480 seconds)
[16:04] <James_259> ahh, we definitely saw that.
[16:04] <James_259> weighting was done based on disk size but one osd hit near_full (85%) yet another was at 60%.
[16:05] <James_259> seems more balanced now tho
[16:05] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[16:06] * rturk|afk (~rturk@nat-pool-rdu-t.redhat.com) has joined #ceph
[16:07] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[16:13] * go_habs_go (~sdfsdf@132.217.254.174) has joined #ceph
[16:13] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[16:15] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[16:15] * saltlake (~saltlake@12.250.199.170) has joined #ceph
[16:15] <go_habs_go> I need some help to make Temporary URL work with radosgw + swift on giant; I always got “Authorization failed”
[16:15] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[16:18] * kawa2014 (~kawa@net-93-147-178-140.cust.dsl.teletu.it) has joined #ceph
[16:19] * segutier (~segutier@rrcs-97-79-140-195.sw.biz.rr.com) Quit (Quit: segutier)
[16:20] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:22] * joef (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) has joined #ceph
[16:23] * analbeard (~shw@support.memset.com) Quit (Remote host closed the connection)
[16:25] * joef1 (~Adium@2620:79:0:2420::b) has joined #ceph
[16:25] * analbeard (~shw@support.memset.com) has joined #ceph
[16:25] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[16:25] * scuttle|afk is now known as scuttlemonkey
[16:28] <go_habs_go> So there more information about my Temporary Url http://pastebin.com/TFaRDKwm
[16:30] * joef (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:40] * togdon (~togdon@74.121.28.6) has joined #ceph
[16:41] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[16:42] * jcsp (~jcsp@0001bf3a.user.oftc.net) has joined #ceph
[16:43] * cok (~chk@2a02:2350:18:1010:f4c5:f71:88a5:41f9) Quit (Quit: Leaving.)
[16:45] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[16:45] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[16:46] * dgurtner (~dgurtner@178.197.231.20) Quit (Ping timeout: 480 seconds)
[16:46] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[16:48] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[16:48] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit ()
[16:48] * ChanServ sets mode +o scuttlemonkey
[16:49] * scuttlemonkey changes topic to 'http://ceph.com/get || dev channel #ceph-devel || test lab channel #sepia || Seeking Ceph Day speakers, contact scuttlemonkey'
[16:49] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[16:51] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[16:51] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[16:53] * bkopilov (~bkopilov@bzq-109-67-167-181.red.bezeqint.net) has joined #ceph
[16:54] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[16:54] * saltlake (~saltlake@12.250.199.170) Quit (Quit: Nettalk6 - www.ntalk.de)
[16:55] * fattaneh (~fattaneh@31.59.61.179) has joined #ceph
[16:55] * fattaneh (~fattaneh@31.59.61.179) Quit (Max SendQ exceeded)
[16:57] * fattaneh (~fattaneh@31.59.61.179) has joined #ceph
[17:00] * ircolle (~Adium@2601:1:a580:145a:d48b:2093:624:ab2a) has joined #ceph
[17:03] * qybl (~foo@maedhros.krzbff.de) Quit (Quit: WeeChat 1.0.1)
[17:04] * Mentalow (~textual@193.52.208.233) has joined #ceph
[17:04] <Mentalow> Hey there. Nice glitch =D recovery -20/15 objects degraded (-133.333%);
[17:04] <Mentalow> <3
[17:05] * qybl (~foo@maedhros.krzbff.de) has joined #ceph
[17:06] <jcsp> Mentalow: please leave details of how you got into that state on http://tracker.ceph.com/issues/5884
[17:06] * lalatenduM (~lalatendu@86.59.2.154) Quit (Read error: Connection reset by peer)
[17:06] * marrusl (~mark@cpe-24-90-46-248.nyc.res.rr.com) has joined #ceph
[17:06] <Mentalow> =D thanks
[17:07] <Mentalow> I don't even really know haha
[17:08] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[17:09] * lalatenduM (~lalatendu@86.59.2.154) has joined #ceph
[17:09] * linjan (~linjan@80.178.220.195.adsl.012.net.il) has joined #ceph
[17:10] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[17:11] * fattaneh (~fattaneh@31.59.61.179) has left #ceph
[17:15] <Mentalow> jcsp: Any command output you recommend I send? I already printed ceph --version and ceph -s
[17:15] * saltlake (~saltlake@12.250.199.170) has joined #ceph
[17:17] * BManojlovic (~steki@178-221-74-244.dynamic.isp.telekom.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:19] * asalor (~asalor@2a00:1028:96c1:4f6a:204:e2ff:fea1:64e6) has joined #ceph
[17:19] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[17:20] * kanagaraj (~kanagaraj@116.75.74.0) has joined #ceph
[17:24] * calvinx (~calvin@103.7.202.198) Quit (Quit: calvinx)
[17:24] <saltlake> joshd: Are you available ?
[17:26] <pmxceph> Mentalow: Your OSDs are near full, that would be one problem
[17:26] <Mentalow> Full ? Really ? Oh okay
[17:27] <Mentalow> I have no object/data in the OSD x)
[17:27] <Mentalow> I am not "John Spray"
[17:27] <Mentalow> 340G free on 350G
[17:27] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[17:30] * jcsp (~jcsp@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:32] * James_259 (51bbfe8a@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[17:33] * saltlake2 (~saltlake@12.250.199.170) has joined #ceph
[17:35] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[17:35] * Mentalow (~textual@193.52.208.233) Quit (Ping timeout: 480 seconds)
[17:36] * joef1 (~Adium@2620:79:0:2420::b) has left #ceph
[17:37] * saltlake (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[17:37] * lalatenduM (~lalatendu@86.59.2.154) Quit (Quit: Leaving)
[17:38] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:38] * ohnomrbill (~ohnomrbil@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[17:39] <burley> Any thoughts on page allocation failures on OSD nodes running CentOS 7? I am finding other threads with impacted folks in similar configurations but hoping that maybe someone here knows a good solution for them.
[17:39] <pmxceph> Mentalow: I think I misunderstood the link you posted. I thought the ceph tracker had your Ceph output, but I can see that it's somebody else's output from over a year ago. You said you already printed ceph output; I am not seeing where.
[17:40] * linjan (~linjan@80.178.220.195.adsl.012.net.il) Quit (Ping timeout: 480 seconds)
[17:41] <pmxceph> burley: Are you using Mellanox HCAs on your CentOS ceph?
[17:41] * segutier (~segutier@216-166-19-146.fwd.datafoundry.com) has joined #ceph
[17:43] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[17:43] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[17:47] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:47] * dmsimard_away is now known as dmsimard
[17:48] * oro (~oro@2001:620:20:16:e433:3ad3:683:82b4) Quit (Ping timeout: 480 seconds)
[17:49] * linjan (~linjan@176.195.210.7) has joined #ceph
[17:50] <georgem> is it better to have separated 10 Gbps NICs for ceph public traffic and replication traffic, or should I bond them and share the links between the two types of traffic?
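If the two kinds of traffic are split rather than bonded, ceph.conf distinguishes them with two options; subnets below are placeholders:

    [global]
    # client <-> OSD/monitor traffic
    public network = 10.0.1.0/24
    # OSD <-> OSD replication and backfill traffic
    cluster network = 10.0.2.0/24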
[17:50] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[17:51] <burley> pmxceph: We're using 10Gb Broadcom ethernet
[17:53] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[17:55] * sjusthm (~sam@96-39-232-68.dhcp.mtpk.ca.charter.com) has joined #ceph
[17:55] * sjusthm (~sam@96-39-232-68.dhcp.mtpk.ca.charter.com) has left #ceph
[17:55] <darkfader> burley: i hadn't heard about it but could it be a hugepages issue? did you try turning them off (i think they would be on by default there)
[17:58] * rwheeler (~rwheeler@nat-pool-tlv-u.redhat.com) Quit (Remote host closed the connection)
[17:59] <burley> darkfader: I don't think so, since the PAF order was 2
[18:00] <darkfader> ok
[18:00] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[18:00] * jcsp (~jcsp@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[18:01] * sudocat (~davidi@192.185.1.20) has joined #ceph
[18:02] * nitti (~nitti@162.222.47.218) Quit (Quit: Leaving...)
[18:03] * nitti (~nitti@162.222.47.218) has joined #ceph
[18:05] * vbellur (~vijay@122.178.243.251) has joined #ceph
[18:05] * James_259 (51bbfe8a@107.161.19.109) has joined #ceph
[18:05] * joshd1 (~jdurgin@24-205-54-236.dhcp.gldl.ca.charter.com) Quit (Quit: Leaving.)
[18:07] * jcsp1 (~Adium@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[18:09] * ScOut3R (~ScOut3R@catv-80-98-46-171.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[18:11] * rmoe (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[18:13] * off_rhoden (~off_rhode@209.132.181.86) Quit (Ping timeout: 480 seconds)
[18:13] * gregsfortytwo (~gregsfort@209.132.181.86) Quit (Ping timeout: 480 seconds)
[18:19] <pmxceph> burley: This issue is somewhat frequent on CentOS ceph, especially with Infiniband. One way to address this without making any configuration changes is to ensure there is plenty of free memory available. The error occurs when the kernel runs out of memory for some reason. You can also try to make this change in sysctl.d > vm/min_free_kbytes = 524288. May help the situation
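A sketch of the persistent form of that tuning; the file name and value are examples, not a recommendation:

    # /etc/sysctl.d/90-ceph-osd.conf
    vm.min_free_kbytes = 524288

    # apply without rebooting
    sysctl -p /etc/sysctl.d/90-ceph-osd.conf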
[18:22] * Mentalow (~textual@rc137-h03-89-87-200-17.dsl.sta.abo.bbox.fr) has joined #ceph
[18:22] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:24] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:24] * rohanm (~rohanm@mobile-166-173-185-157.mycingular.net) Quit (Ping timeout: 480 seconds)
[18:24] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[18:24] * kawa2014 (~kawa@net-93-147-178-140.cust.dsl.teletu.it) Quit (Quit: Leaving)
[18:25] <burley> pmxceph: That helps elongate the time between incidents, but doesn't solve it
[18:27] <pmxceph> burley: true. I don't think there is a definite solution to this issue, except one of my acquaintances informed me recently that the very latest kernel in CentOS may have solved this issue, as he did not have the issue any more, 'yet'
[18:28] <pmxceph> burley: I primarily use Debian and Ubuntu for all ceph cluster i manage. So cannot speak from my experience with centos
[18:29] <burley> I considered upgrading the kernel but didn't see anything in the changelogs about it
[18:30] * branto (~branto@213.175.37.10) has left #ceph
[18:32] * saltlake (~saltlake@12.250.199.170) has joined #ceph
[18:34] * saltlake2 (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[18:37] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[18:38] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[18:38] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:40] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[18:43] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) has joined #ceph
[18:45] * s3an2 (~sean@korn.s3an.me.uk) has joined #ceph
[18:46] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[18:50] * kanagaraj (~kanagaraj@116.75.74.0) Quit (Quit: Leaving)
[18:50] * kanagaraj (~kanagaraj@116.75.74.0) has joined #ceph
[18:51] <seapasulli> anyone running EC who has seen OSDs abort and restart constantly?
[18:51] <seapasulli> saw this bug (http://tracker.ceph.com/issues/7506) but it looks like it was patched in .77 and I'm running .87
[18:52] * sputnik13 (~sputnik13@74.202.214.170) has joined #ceph
[18:55] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[18:56] * ScOut3R (~ScOut3R@4E5CC061.dsl.pool.telekom.hu) has joined #ceph
[18:56] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit (Quit: Leaving.)
[18:56] <seapasulli> http://paste.ubuntu.com/10176075/
[18:57] <seapasulli> looks like the same issue though :-(
[19:00] * mattrich (~Adium@38.108.161.130) has joined #ceph
[19:01] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[19:04] * jcsp (~jcsp@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[19:04] <mattrich> Is there any way, at the librados level, to determine the offset at which an append to an object happened? I'd like to add a stat() to a rados_write_op with an append(), but that does not seem to work (I think because Objecter::prepare_mutate_op() doesn't do anything with the output bufferlist).
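As a workaround sketch only (not the in-write-op stat mattrich is after): the size a stat reports immediately before an append equals the offset the append lands at, provided no other writer touches the object in between. Pool and object names below are hypothetical, and error checking is omitted for brevity:

    /* build: cc append_offset.c -lrados */
    #include <rados/librados.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        rados_t cluster;
        rados_ioctx_t io;
        uint64_t size = 0;
        time_t mtime;
        const char *payload = "some bytes";

        rados_create(&cluster, NULL);        /* connect as client.admin */
        rados_conf_read_file(cluster, NULL); /* default ceph.conf search path */
        rados_connect(cluster);
        rados_ioctx_create(cluster, "rbd", &io); /* hypothetical pool name */

        /* size before the append == offset the append will start at,
         * assuming no concurrent writers (this is the racy part) */
        rados_stat(io, "myobject", &size, &mtime);
        rados_append(io, "myobject", payload, strlen(payload));
        printf("append started at offset %llu\n", (unsigned long long)size);

        rados_ioctx_destroy(io);
        rados_shutdown(cluster);
        return 0;
    }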
[19:04] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[19:05] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:07] * raso1 (~raso@deb-multimedia.org) has joined #ceph
[19:11] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[19:12] * raso1 (~raso@deb-multimedia.org) Quit (Read error: Connection reset by peer)
[19:12] * raso (~raso@deb-multimedia.org) Quit (Ping timeout: 480 seconds)
[19:12] <ohnomrbill> mattrich: You might have better luck asking in #ceph-devel
[19:13] <mattrich> ohnomrbill: thanks, will do!
[19:14] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[19:14] * ScOut3R (~ScOut3R@4E5CC061.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[19:14] * James_259 (51bbfe8a@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[19:19] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:283f:f3f5:7d01:fc1d) Quit (Remote host closed the connection)
[19:20] * madkiss (~madkiss@2001:6f8:12c3:f00f:7c54:62b5:a7f6:8158) has joined #ceph
[19:22] * raso (~raso@deb-multimedia.org) has joined #ceph
[19:23] * cholcombe973 (~chris@7208-76ef-ff1f-ed2f-329a-f002-3420-2062.6rd.ip6.sonic.net) has joined #ceph
[19:25] * BManojlovic (~steki@cable-89-216-229-100.dynamic.sbb.rs) has joined #ceph
[19:27] * linuxkidd (~linuxkidd@113.sub-70-210-192.myvzw.com) Quit (Ping timeout: 480 seconds)
[19:30] * raso (~raso@deb-multimedia.org) Quit (Read error: Connection reset by peer)
[19:32] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[19:33] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit (Quit: Leaving.)
[19:35] * alram (~alram@ppp-seco11pa2-46-193-140-198.wb.wifirst.net) has joined #ceph
[19:35] * seapasulli (~seapasull@95.85.33.150) Quit (Quit: leaving)
[19:36] * linuxkidd (~linuxkidd@192.sub-70-210-231.myvzw.com) has joined #ceph
[19:40] * raso (~raso@deb-multimedia.org) has joined #ceph
[19:45] * seapasulli (~seapasull@95.85.33.150) has joined #ceph
[19:59] * vbellur (~vijay@122.178.243.251) Quit (Ping timeout: 480 seconds)
[19:59] * brutuscat (~brutuscat@137.Red-83-42-88.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[20:00] * Syne (~ellisgeek@205.213.134.253) has joined #ceph
[20:02] * mattrich1 (~Adium@38.108.161.130) has joined #ceph
[20:02] * mattrich1 (~Adium@38.108.161.130) has left #ceph
[20:03] * bandrus (~brian@54.sub-70-211-78.myvzw.com) has joined #ceph
[20:05] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (Read error: Connection reset by peer)
[20:06] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[20:07] * mattrich (~Adium@38.108.161.130) Quit (Ping timeout: 480 seconds)
[20:07] * raso (~raso@deb-multimedia.org) Quit (Read error: Connection reset by peer)
[20:12] * raso (~raso@deb-multimedia.org) has joined #ceph
[20:14] * ljou (~chatzilla@c-50-184-100-25.hsd1.ca.comcast.net) has joined #ceph
[20:15] <ljou> Is there any "consistency group" support planned for volumes?
[20:18] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[20:21] * raso (~raso@deb-multimedia.org) Quit (Read error: Connection reset by peer)
[20:22] * kanagaraj (~kanagaraj@116.75.74.0) Quit (Quit: Leaving)
[20:25] * raso (~raso@deb-multimedia.org) has joined #ceph
[20:25] * bandrus (~brian@54.sub-70-211-78.myvzw.com) Quit (Quit: Leaving.)
[20:26] <CephTestC> Does anyone have a good website with an example of all the steps to add a cache tier?
[20:26] * puffy (~puffy@216.207.42.129) has joined #ceph
[20:26] * fattaneh (~fattaneh@31.59.61.179) has joined #ceph
[20:26] * fattaneh (~fattaneh@31.59.61.179) Quit (Max SendQ exceeded)
[20:27] * fattaneh (~fattaneh@31.59.61.179) has joined #ceph
[20:30] * macjack (~Thunderbi@123.51.160.200) Quit (Remote host closed the connection)
[20:37] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:45] <seapasulli> CephTestC: I believe Sebastien Han does
[20:46] <seapasulli> I can't find the link right now for some reason though :-(
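In the meantime, a rough sketch of the usual cache-tier sequence, with hypothetical pool names (hot-pool as the SSD cache in front of cold-pool) and purely illustrative thresholds that would need tuning per cluster:

    ceph osd tier add cold-pool hot-pool
    ceph osd tier cache-mode hot-pool writeback
    ceph osd tier set-overlay cold-pool hot-pool

    # hit-set tracking so the tiering agent can make eviction decisions
    ceph osd pool set hot-pool hit_set_type bloom
    ceph osd pool set hot-pool hit_set_count 1
    ceph osd pool set hot-pool hit_set_period 3600

    # sizing/flush thresholds (values illustrative)
    ceph osd pool set hot-pool target_max_bytes 1000000000000
    ceph osd pool set hot-pool cache_target_dirty_ratio 0.4
    ceph osd pool set hot-pool cache_target_full_ratio 0.8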
[20:49] * diegows (~diegows@host112.190-136-175.telecom.net.ar) has joined #ceph
[20:50] * diegows (~diegows@host112.190-136-175.telecom.net.ar) Quit ()
[20:55] * raso (~raso@deb-multimedia.org) Quit (Read error: Connection reset by peer)
[20:59] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) has joined #ceph
[21:01] * Mentalow (~textual@rc137-h03-89-87-200-17.dsl.sta.abo.bbox.fr) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[21:03] * fattaneh (~fattaneh@31.59.61.179) has left #ceph
[21:05] * raso (~raso@deb-multimedia.org) has joined #ceph
[21:05] * alram (~alram@ppp-seco11pa2-46-193-140-198.wb.wifirst.net) Quit (Quit: leaving)
[21:09] <CephTestC> seapasulli: Thanks I'll keep an eye out!
[21:10] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:14] * togdon (~togdon@74.121.28.6) has joined #ceph
[21:14] * togdon (~togdon@74.121.28.6) Quit ()
[21:15] * raso (~raso@deb-multimedia.org) Quit (Read error: Connection reset by peer)
[21:16] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) Quit (Quit: Ex-Chat)
[21:19] * joef (~Adium@2601:9:280:f2e:dd03:5a80:af98:38b5) has joined #ceph
[21:25] * arbrandes (~arbrandes@189.110.13.102) has joined #ceph
[21:27] * joef (~Adium@2601:9:280:f2e:dd03:5a80:af98:38b5) has left #ceph
[21:28] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[21:29] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[21:32] <saltlake> joshd: ping ?
[21:33] <joshd> saltlake: hey
[21:34] <saltlake> joshd: good to see you today!! I have not had a chance to read the code yet, and hell broke loose at the office, so I was on and off in front of my laptop!!
[21:34] <saltlake> joshd: wanted to check if there was something you saw (I intend to look at the code tonight)
[21:34] * raso (~raso@deb-multimedia.org) has joined #ceph
[21:35] <joshd> saltlake: no, there's no hint as to why it would hang where it did in the log
[21:36] <saltlake> joshd: the log was massive; I could not paste all of it to pastebin successfully. I will attempt it again tonight and look at it... hopefully you will be around on this channel tomorrow to discuss
[21:36] <saltlake> joshd: thanks... a lot!!
[21:36] <joshd> saltlake: ah ok, maybe try fpaste.org or another site
[21:37] <joshd> you can upload it with ceph-post-file if it's still too big
[21:38] <saltlake> joshd: ok... will attempt to read the code though :-) and see if I can figure it out..
[21:39] <joshd> cool, good luck
[21:39] * puffy (~puffy@216.207.42.129) Quit (Ping timeout: 480 seconds)
[21:40] <joshd> may also want to add --debug-rados 20 and check out librados/snap_set_diff.cc
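Assuming the hang saltlake is chasing is in an rbd diff/export-diff path (which is what librados/snap_set_diff.cc serves), the extra logging joshd suggests could look like this; image and snapshot names are hypothetical:

    rbd export-diff rbd/myimage@snap1 /tmp/diff \
        --debug-rbd 20 --debug-rados 20 --log-file /tmp/rbd-diff.log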
[21:41] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[21:49] * badone (~brad@203-121-198-226.e-wire.net.au) has joined #ceph
[21:49] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[21:50] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[21:55] * lcurtis (~lcurtis@47.19.105.250) Quit (Ping timeout: 480 seconds)
[21:56] * elder__ (~elder@210.177.145.249) has joined #ceph
[22:01] * andreask (~andreask@h081217069051.dyn.cm.kabsi.at) has joined #ceph
[22:01] * ChanServ sets mode +v andreask
[22:03] * jrankin (~jrankin@d53-64-170-236.nap.wideopenwest.com) Quit (Quit: Leaving)
[22:06] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[22:11] * linjan (~linjan@176.195.210.7) Quit (Ping timeout: 480 seconds)
[22:29] * puffy (~puffy@216.207.42.129) has joined #ceph
[22:29] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit (Quit: Leaving.)
[22:33] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[22:44] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[22:47] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[22:51] <CephTestC> Hi team, is it possible to set the cache tier on the same SSDs you have your journals on?
[22:51] * saltlake (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[22:51] <gleam> sure, as long as you run the OSD on a different partition on the ssd
[22:51] <gleam> e.g. 6 journal partitions and one osd partition
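A sketch of the layout gleam describes for a single SSD; the device name and sizes are hypothetical, and this assumes a GPT disk with enough free space:

    # six 10 GB journal partitions on /dev/sdb (hypothetical device/sizes)
    for i in 1 2 3 4 5 6; do
        sgdisk --new=${i}:0:+10G /dev/sdb
    done
    # partition 7: whatever space remains, for the cache-tier OSD's data
    sgdisk --new=7:0:0 /dev/sdb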
[22:54] <georgem> Hi, is it better to have separate 10 Gbps NICs for ceph public traffic and replication traffic, or should I bond them and share the links between the two types of traffic?
[22:54] * Syne (~ellisgeek@205.213.134.253) Quit (Ping timeout: 480 seconds)
[22:55] * marrusl (~mark@cpe-24-90-46-248.nyc.res.rr.com) Quit (Remote host closed the connection)
[22:55] * togdon (~togdon@74.121.28.6) has joined #ceph
[22:55] <georgem> I think that if I bond them, then when replication traffic needs more bandwidth it can take it from the second link, and vice versa
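A sketch of that bonded alternative on Debian/Ubuntu (ifenslave-style configuration; interface names and addressing are hypothetical, and the switch side must be configured for LACP to match):

    auto bond0
    iface bond0 inet static
        address 10.0.1.10
        netmask 255.255.255.0
        bond-slaves eth0 eth1
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
        bond-miimon 100

Note that with 802.3ad any single TCP flow is still limited to one link's bandwidth; the pooling benefit georgem describes shows up in aggregate, across the many concurrent OSD connections.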
[22:59] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[23:02] * ircolle is now known as ircolle-afk
[23:03] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[23:03] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) has joined #ceph
[23:07] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) Quit ()
[23:07] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) has joined #ceph
[23:15] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[23:24] * badone (~brad@203-121-198-226.e-wire.net.au) Quit (Ping timeout: 480 seconds)
[23:28] * mykola (~Mikolaj@91.225.200.48) Quit (Quit: away)
[23:33] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:34] * elder__ (~elder@210.177.145.249) Quit (Ping timeout: 480 seconds)
[23:36] * karis (~karis@78-106-206.adsl.cyta.gr) has joined #ceph
[23:44] * andreask (~andreask@h081217069051.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[23:53] * badone (~brad@66.187.239.16) has joined #ceph
[23:55] * bandrus (~brian@54.sub-70-211-78.myvzw.com) has joined #ceph
[23:59] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[23:59] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) Quit (Quit: jdillaman)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.