#ceph IRC Log


IRC Log for 2016-09-23

Timestamps are in GMT/BST.

[0:01] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[0:05] * PaulCuzner (~paul@115-188-37-89.jetstream.xtra.co.nz) Quit (Quit: PaulCuzner)
[0:06] * xinli (~charleyst@32.97.110.52) Quit (Ping timeout: 480 seconds)
[0:11] * sphinxx (~sphinxx@154.118.121.108) Quit (Ping timeout: 480 seconds)
[0:11] * thansen (~thansen@17.253.sfcn.org) Quit (Quit: Ex-Chat)
[0:14] * Terry (~Terry@c-98-226-9-210.hsd1.il.comcast.net) has joined #ceph
[0:17] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:17] * mnc (~mnc@c-50-137-214-131.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[0:22] * Freddy (~sixofour@178-175-128-50.static.host) has joined #ceph
[0:24] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[0:25] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[0:27] * Kingrat (~shiny@2605:6000:1526:4063:3509:8cd9:fa1d:1f82) Quit (Remote host closed the connection)
[0:33] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[0:33] * ntpttr_ (~ntpttr@192.55.55.39) Quit (Remote host closed the connection)
[0:34] * ntpttr (~ntpttr@134.134.137.73) has joined #ceph
[0:41] * topro (~prousa@p578af414.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[0:41] <Terry> hello, question about multi-site replication in the Jewel release. Do you have to use the RadosGW API via S3 or Swift to make use of multi-site replication or can you also do multi-site replication if you use librados directly?
[0:44] * vbellur (~vijay@71.234.224.255) has joined #ceph
[0:51] * thansen (~thansen@17.253.sfcn.org) has joined #ceph
[0:52] * Freddy (~sixofour@5AEAABTVO.tor-irc.dnsbl.oftc.net) Quit ()
[0:56] * vata (~vata@96.127.202.136) has joined #ceph
[1:00] * cathode (~cathode@50-232-215-114-static.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[1:03] * kuku (~kuku@119.93.91.136) has joined #ceph
[1:06] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[1:06] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[1:08] * borei (~dan@216.13.217.230) Quit (Ping timeout: 480 seconds)
[1:11] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:15] * kiwnix (~egarcia@azrsrv.egarcia.info) has joined #ceph
[1:16] * kiwnix (~egarcia@00011f91.user.oftc.net) Quit ()
[1:17] * togdon (~togdon@74.121.28.6) Quit (Quit: Bye-Bye.)
[1:20] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[1:23] * kiwnix (~egarcia@00011f91.user.oftc.net) has joined #ceph
[1:29] * thansen (~thansen@17.253.sfcn.org) Quit (Quit: Ex-Chat)
[1:31] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[1:32] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[1:32] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[1:32] * [0x4A6F]_ (~ident@p4FC27B6E.dip0.t-ipconnect.de) has joined #ceph
[1:33] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:33] * [0x4A6F]_ is now known as [0x4A6F]
[1:39] * oms101 (~oms101@p20030057EA015700C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:40] * kuku (~kuku@119.93.91.136) has joined #ceph
[1:47] * oms101 (~oms101@p20030057EA01E900C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:48] * ntpttr (~ntpttr@134.134.137.73) Quit (Remote host closed the connection)
[1:53] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[1:57] * davidzlap (~Adium@2605:e000:1313:8003:b12c:95c8:8e26:3994) Quit (Quit: Leaving.)
[1:59] * davidzlap (~Adium@2605:e000:1313:8003:b12c:95c8:8e26:3994) has joined #ceph
[2:01] * linuxkidd (~linuxkidd@ip70-189-202-62.lv.lv.cox.net) Quit (Quit: Leaving)
[2:03] * Jeffrey4l_ (~Jeffrey@110.244.236.101) has joined #ceph
[2:16] * salwasser (~Adium@2601:197:101:5cc1:6582:91a:2a44:3a15) has joined #ceph
[2:21] * lixiaoy1 (~lixiaoy1@shzdmzpr01-ext.sh.intel.com) has joined #ceph
[2:21] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[2:25] * xarses (~xarses@64.124.158.3) Quit (Ping timeout: 480 seconds)
[2:28] * wushudoin (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[2:32] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[2:34] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[2:42] <stupidnic> Okay. I am having an issue with ceph-deploy, and I think the issue might actually be in ceph-disk. When I try to deploy an OSD with a journal to Ubuntu 16.04 I am seeing a traceback in the activate-journal command.
[2:42] * kefu (~kefu@114.92.125.128) has joined #ceph
[2:45] <stupidnic> Specifically it looks like the command ceph-osd --get-device-fsid isn't returning the correct value which causes ceph-disk to detect the wrong UUID for the journal
[2:45] <stupidnic> It even states that in the stderr output of the command
[2:45] <stupidnic> "SG_IO: questionable sense data, results may be incorrect"
[2:46] * davidzlap (~Adium@2605:e000:1313:8003:b12c:95c8:8e26:3994) Quit (Ping timeout: 480 seconds)
[2:51] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[2:56] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[2:57] * sugoruyo (~textual@host81-151-153-166.range81-151.btcentralplus.com) has joined #ceph
[2:57] * Kingrat (~shiny@2605:6000:1526:4063:d16a:5007:d070:aec1) has joined #ceph
[2:58] * salwasser (~Adium@2601:197:101:5cc1:6582:91a:2a44:3a15) Quit (Quit: Leaving.)
[3:01] * sugoruyo (~textual@host81-151-153-166.range81-151.btcentralplus.com) Quit ()
[3:01] * salwasser (~Adium@2601:197:101:5cc1:874:434e:e363:fce) has joined #ceph
[3:04] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[3:06] * kristen (~kristen@jfdmzpr01-ext.jf.intel.com) Quit (Quit: Leaving)
[3:06] * kiwnix (~egarcia@00011f91.user.oftc.net) Quit (Quit: -)
[3:07] * jfaj_ (~jan@p4FC5BA33.dip0.t-ipconnect.de) has joined #ceph
[3:14] * jfaj (~jan@p4FD26EBD.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:16] * aj__ (~aj@x4db2ab28.dyn.telefonica.de) has joined #ceph
[3:22] <ronrib> has anyone played with zrlio crail? brand new distributed storage system that ehem requires java
[3:23] * derjohn_mobi (~aj@x4db11590.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[3:24] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Quit: Leaving.)
[3:30] * salwasser (~Adium@2601:197:101:5cc1:874:434e:e363:fce) Quit (Quit: Leaving.)
[3:31] * atheism (~atheism@182.48.117.114) has joined #ceph
[3:32] * yanzheng1 (~zhyan@118.116.115.45) has joined #ceph
[3:42] <rkeene> ronrib, Like XtreemFS ?
[3:49] * blizzow (~jburns@c-50-152-51-96.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[3:50] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:57] <ronrib> apparently, though I like how XtreemFS doesn't specifically say fast
[4:07] * davidzlap (~Adium@2605:e000:1313:8003:8c34:9b87:1a7c:4f32) has joined #ceph
[4:32] <rkeene> It's EXTREME !
[4:37] * percevalbot (~supybot@pct-empresas-83.uc3m.es) Quit (Remote host closed the connection)
[4:38] * raphaelsc (~raphaelsc@179.187.140.226) has joined #ceph
[4:41] * percevalbot (~supybot@pct-empresas-83.uc3m.es) has joined #ceph
[4:50] * squizzi (~squizzi@107.13.237.240) Quit (Quit: bye)
[4:56] * davidzlap (~Adium@2605:e000:1313:8003:8c34:9b87:1a7c:4f32) Quit (Quit: Leaving.)
[5:01] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[5:02] * diver (~diver@cpe-2606-A000-111B-C12B-948D-2866-781-ECF8.dyn6.twc.com) has joined #ceph
[5:05] * m8x (~user@182.150.27.112) has joined #ceph
[5:09] * diver (~diver@cpe-2606-A000-111B-C12B-948D-2866-781-ECF8.dyn6.twc.com) Quit (Remote host closed the connection)
[5:09] * diver (~diver@cpe-2606-A000-111B-C12B-948D-2866-781-ECF8.dyn6.twc.com) has joined #ceph
[5:12] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[5:14] * Vacuum__ (~Vacuum@88.130.200.30) has joined #ceph
[5:15] * atheism (~atheism@182.48.117.114) Quit (Quit: leaving)
[5:17] * atheism (~atheism@182.48.117.114) has joined #ceph
[5:17] * diver (~diver@cpe-2606-A000-111B-C12B-948D-2866-781-ECF8.dyn6.twc.com) Quit (Ping timeout: 480 seconds)
[5:20] * Vacuum_ (~Vacuum@88.130.208.123) Quit (Ping timeout: 480 seconds)
[5:23] * adamcrume_ (~quassel@2601:647:cb01:f890:113e:5305:20b4:b646) has joined #ceph
[5:37] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[5:48] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[5:56] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[5:59] * atheism (~atheism@182.48.117.114) Quit (Remote host closed the connection)
[5:59] * atheism (~atheism@182.48.117.114) has joined #ceph
[6:07] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[6:11] * walcubi (~walcubi@p5797A2A5.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:12] * walcubi (~walcubi@p5795BE69.dip0.t-ipconnect.de) has joined #ceph
[6:22] * vata (~vata@96.127.202.136) Quit (Quit: Leaving.)
[6:35] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[6:35] * kefu (~kefu@114.92.125.128) has joined #ceph
[6:41] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[6:44] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[6:45] * kefu (~kefu@114.92.125.128) has joined #ceph
[6:48] * kefu_ (~kefu@114.92.125.128) has joined #ceph
[6:54] * kefu (~kefu@114.92.125.128) Quit (Ping timeout: 480 seconds)
[7:00] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[7:01] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[7:13] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[7:16] * ircolle1 (~Adium@2601:285:201:633a:b4a3:b89d:6ced:ac25) has joined #ceph
[7:19] * ircolle (~Adium@2601:285:201:633a:b4a3:b89d:6ced:ac25) Quit (Read error: Connection reset by peer)
[7:20] * ivve (~zed@131.red-212-170-59.staticip.rima-tde.net) has joined #ceph
[7:26] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[7:28] * ivve (~zed@131.red-212-170-59.staticip.rima-tde.net) Quit (Ping timeout: 480 seconds)
[7:37] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:40] * fastlife2042 (~fastlife2@122-118-145-85.ftth.glasoperator.nl) has joined #ceph
[7:49] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[7:54] * kuku (~kuku@119.93.91.136) has joined #ceph
[7:58] * rwheeler (~rwheeler@bzq-84-111-170-30.cablep.bezeqint.net) Quit (Quit: Leaving)
[7:59] * rdas (~rdas@121.244.87.116) has joined #ceph
[8:21] * jamespag` (~jamespage@culvain.gromper.net) Quit (Read error: Connection reset by peer)
[8:21] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[8:24] * topro (~prousa@p578af414.dip0.t-ipconnect.de) has joined #ceph
[8:28] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[8:34] * rdas (~rdas@121.244.87.116) Quit (Ping timeout: 480 seconds)
[8:36] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) has joined #ceph
[8:39] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[8:43] * rdas (~rdas@121.244.87.116) has joined #ceph
[8:52] <peetaur2> jiffe: maybe a bad disk? (is it a consumer sata disk?)
[8:53] <peetaur2> stupidnic: I suggest checking out the manual way to deploy, and then you can see more specifically what is failing
[8:54] <peetaur2> like I had a problem with ceph-deploy with mds... and looked into the python code and could not figure out how it could *ever* work...and then had to hardcode some stuff to fix it; and then wrote a manual way to do it based on that (which I committed to the docs)
[8:54] * Amto_res_ (~amto_res@ks312256.kimsufi.com) Quit (Remote host closed the connection)
[8:55] <peetaur2> one of the bugs I ran into was http://tracker.ceph.com/issues/16443 FYI
[9:11] * Goodi (~Hannu@85-76-98-131-nat.elisa-mobile.fi) has joined #ceph
[9:22] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) has joined #ceph
[9:23] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) Quit ()
[9:32] * rmart04 (~rmart04@support.memset.com) has joined #ceph
[9:38] * sphinxx (~sphinxx@41.217.204.74) has joined #ceph
[9:39] * flisky (~Thunderbi@106.38.61.185) has joined #ceph
[9:39] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[9:42] * flisky (~Thunderbi@106.38.61.185) Quit ()
[9:44] * flisky (~Thunderbi@106.38.61.184) has joined #ceph
[9:45] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[9:47] * ggarg (~ggarg@host-82-135-29-34.customer.m-online.net) has joined #ceph
[9:57] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) has joined #ceph
[9:57] * kuku (~kuku@119.93.91.136) Quit (Remote host closed the connection)
[9:58] * raphaelsc (~raphaelsc@179.187.140.226) Quit (Remote host closed the connection)
[10:02] * fsimonce (~simon@host98-71-dynamic.1-87-r.retail.telecomitalia.it) has joined #ceph
[10:05] * TMM (~hp@dhcp-077-248-009-229.chello.nl) Quit (Quit: Ex-Chat)
[10:07] * flisky (~Thunderbi@106.38.61.184) Quit (Quit: flisky)
[10:12] * DanFoster (~Daniel@2a00:1ee0:3:1337:d884:4483:1ff3:377c) has joined #ceph
[10:16] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[10:16] * Kurt (~Adium@2001:628:1:5:e460:44e2:5a4f:a9db) Quit (Quit: Leaving.)
[10:16] * Kurt (~Adium@2001:628:1:5:5c89:d1c8:6eb0:551b) has joined #ceph
[10:17] * sphinxx (~sphinxx@41.217.204.74) Quit (Ping timeout: 480 seconds)
[10:21] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[10:26] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[10:29] * sphinxx (~sphinxx@41.217.204.74) has joined #ceph
[10:37] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[10:39] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[10:45] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[10:49] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:b52e:d52c:63df:d5a8) has joined #ceph
[10:50] * TMM (~hp@185.5.121.201) has joined #ceph
[10:51] * lixiaoy1 (~lixiaoy1@shzdmzpr01-ext.sh.intel.com) Quit ()
[10:59] * aj__ (~aj@x4db2ab28.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[11:00] * Hemanth (~hkumar_@125.16.34.66) has joined #ceph
[11:01] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[11:02] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[11:03] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[11:04] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[11:06] * Lokta (~Lokta@carbon.coe.int) has joined #ceph
[11:07] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[11:12] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[11:12] <peetaur2> how would one do incremental backups of cephfs if you're not supposed to use cephfs snaps in production? would rbd snaps work, or not because you have 2 pools (data+metadata) instead of 1 so it's not atomic?
[11:12] * bauruine (~bauruine@mail.tuxli.ch) Quit (Quit: ZNC - http://znc.in)
[11:13] * bauruine (~bauruine@2a01:4f8:130:8285:fefe::36) has joined #ceph
[11:16] <T1w> cephfs and rbd are 2 different ways of storing data in ceph - you cannot use rbd snapshots for anything cephfs related
[11:16] <T1w> cephfs stores filedata and metadata directly in RADOS in a data and metadata pool
[11:16] <T1w> rbd stores data in another pool
[11:18] <peetaur2> I figure rbd is somehow thinking it's an rbd pool... I can use ls like rbd ls cephfs_data but there are no images
[11:18] <T1w> well.. of course
[11:18] <peetaur2> so is there a rados snapshot incremental backup then?
[11:18] <T1w> rbd commands take a pool as an argument
[11:19] <peetaur2> ok but a pool is not an rbd pool but a rados pool, so it looks like it works...
[11:19] <T1w> and there is nothing that prevents you from storing RBDs in any pool as long as permissions are given
[11:20] <T1w> pools are a low-level abstraction used to keep the objects that form RBDs and the similar objects that form cephfs from being mixed
[11:21] * ivve (~zed@131.red-212-170-59.staticip.rima-tde.net) has joined #ceph
[11:22] <T1w> rados commands permit access to the base objects, while rbd commands allow access to the rbd layer that sits on top of rados .. and similarly for cephfs
[11:23] <T1w> an incremental backup of cephfs would require a walk through the entire filesystem - just like any other filesystem
[11:23] <T1w> regardless of the possibility of using snapshots
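(As a rough illustration of the layering T1w describes, here is a python-rados sketch, assuming /etc/ceph/ceph.conf and a pool named cephfs_data, that lists the raw RADOS objects in the cephfs data pool; rbd ls against the same pool runs but finds no images because none of those objects are RBD image headers.)

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # assumed config path
    cluster.connect()
    ioctx = cluster.open_ioctx("cephfs_data")              # assumed pool name

    # cephfs file data is stored as plain objects (roughly <inode>.<block>),
    # the same kind of raw objects an RBD image would be striped into elsewhere
    for obj in ioctx.list_objects():
        print(obj.key)

    ioctx.close()
    cluster.shutdown()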
[11:24] * m8x (~user@182.150.27.112) has left #ceph
[11:28] * nilez (~nilez@96.44.147.98) Quit (Ping timeout: 480 seconds)
[11:28] * nilez (~nilez@96.44.144.42) has joined #ceph
[11:34] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[11:36] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[11:40] * Alexey_Abashkin (~AlexeyAba@91.207.132.67) Quit (Quit: Leaving)
[11:43] <peetaur2> not zfs or btrfs ;)
[11:43] <peetaur2> or rbd
[11:44] <peetaur2> and since it's so trivial on those, I would think a thing like cephfs should also be like that, not like rsync
[11:49] * kuku (~kuku@112.203.19.176) has joined #ceph
[11:52] <Be-El> there is snapshot support in cephfs, similar to the zfs snapshot implementation (creating a special subdirectory)
[11:53] <Be-El> but that feature is experimental, buggy, and will likely eat all your data
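(For reference, the special subdirectory Be-El mentions works roughly like this; a sketch assuming cephfs is mounted at /mnt/cephfs and that the experimental snapshot feature has been enabled, which, per the warning above, is not something to rely on in production.)

    import os

    MOUNT = "/mnt/cephfs"                                    # assumed mount point
    snap_dir = os.path.join(MOUNT, "projects", ".snap", "backup-2016-09-23")

    os.mkdir(snap_dir)            # creating a directory under .snap takes a snapshot
    print(os.listdir(snap_dir))   # read-only view of the tree as of the snapshot
    # os.rmdir(snap_dir)          # removing the directory deletes the snapshot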
[11:54] <peetaur2> yes and that is what my question is based on... just wondering how would I make some atomic snapshot+send style backup instead of rsync or ceph replication
[11:55] <peetaur2> ceph osd size = ... replication...? what do you call that in ceph?
[11:55] <peetaur2> pool size rather
[11:56] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[11:57] <Be-El> without snapshot support in cephfs, you are limited to the same tools and procedures you use with any other file system without snapshot support
[11:58] <Be-El> that part and the multi-mds support are the last big features missing in cephfs imho
[11:59] <peetaur2> I don't even mind single mds... at my small scale I would only want a hot standby
[11:59] * ivve (~zed@131.red-212-170-59.staticip.rima-tde.net) Quit (Ping timeout: 480 seconds)
[11:59] <peetaur2> but backup is important, and I'm spoiled with how trivial it is on zfs to do backup of 65TB every 20min, so I want something like that here
[11:59] <peetaur2> I'm trying to get rid of some big zfs 65+TB nfs servers that are annoying to maintain
[12:00] <peetaur2> also thought about nfs on rbd like http://www.sebastien-han.fr/blog/2012/07/06/nfs-over-rbd/
[12:01] <Be-El> if you just reexport cephfs via nfs, using an rbd image with the correct filesystem might be the better solution
[12:01] <Be-El> cephfs with a single client (the nfs server) is a waste of resources
[12:02] <peetaur2> yeah if I used nfs it would be with rbd, or cephfs would be direct clients or maybe samba for one of them
[12:03] <peetaur2> resizing rbd is easy, so for the samba one I could just use rbd
[12:03] <peetaur2> but also I don't like nfs... not just the non-redundant problem, but also stupid problems like the "stale nfs file handle" thing where you can't even umount -l and everyone online says to reboot to fix it (but I just mv the parent dir away, move its children to a new dir without the broken mount, and then remount in the new place.....then wait a week and umount will work :D)
[12:05] * hybrid512 (~walid@195.200.189.206) has joined #ceph
[12:05] * ivve (~zed@131.red-212-170-59.staticip.rima-tde.net) has joined #ceph
[12:05] * kuku (~kuku@112.203.19.176) Quit (Read error: Connection reset by peer)
[12:07] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[12:08] * kefu_ (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[12:08] * kefu (~kefu@li1456-173.members.linode.com) has joined #ceph
[12:11] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) has joined #ceph
[12:15] * sphinxx (~sphinxx@41.217.204.74) Quit (Ping timeout: 480 seconds)
[12:16] * JANorman (~JANorman@81.137.246.31) has joined #ceph
[12:17] * sphinxx (~sphinxx@41.217.204.74) has joined #ceph
[12:21] * aj__ (~aj@94.79.172.98) has joined #ceph
[12:21] <JANorman> Hi there, if I have a three-monitor setup (e.g. 1.1.1.1, 1.1.1.2 and 1.1.1.3) and don't care which one I'm connecting to, is there a way with rados_connect to specify all three? Or should a load balancer be put in front of them instead?
[12:25] * nilez (~nilez@96.44.144.42) Quit (Ping timeout: 480 seconds)
[12:25] * nilez (~nilez@96.44.145.58) has joined #ceph
[12:27] <peetaur2> JANorman: they seem to specify ceph.conf, not a specific mon in these examples http://docs.ceph.com/docs/master/rados/api/librados-intro/
[12:30] <T1w> JANorman: no, just specify IPs for all monitors - never use a load balancer in front - it adds a single point of failure
[12:31] <T1w> if one monitor is unavailable all ceph components (clients, commands etc) will just try another monitor
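(A minimal python-rados sketch of what T1w describes: give the client every monitor address, here inline via mon_host using the addresses from JANorman's example, so librados can fail over to another mon; pointing conffile at a ceph.conf that lists all mons achieves the same thing. The client id and keyring path are assumptions.)

    import rados

    cluster = rados.Rados(
        rados_id="admin",                                     # assumed client name
        conf={
            "mon_host": "1.1.1.1,1.1.1.2,1.1.1.3",            # all three monitors
            "keyring": "/etc/ceph/ceph.client.admin.keyring", # assumed keyring path
        },
    )
    cluster.connect()          # librados contacts whichever monitor responds
    print(cluster.get_fsid())
    cluster.shutdown()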
[12:31] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:35] <peetaur2> speaking of which, how do you set the timeout for that? I find it's really really long and delays things when you want the least delays (trying to fix something)
[12:35] <T1w> you dont
[12:35] <peetaur2> and related (osd not mon) if you have a majority of osds down and you don't even know it yet, run ceph -s, and I don't know how long it takes to tell you... I didn't wait long enough
[12:35] <T1w> (afaik)
[12:36] <peetaur2> (I like to play rough when testing :) )
[12:37] <T1w> then you wait some more
[12:37] <T1w> believe me - if it cannot get hold of the requested information you will get an error
[12:38] <JANorman> OK thanks
[12:43] * ReSam (ReSam@catrobat-irc.ist.tu-graz.ac.at) has joined #ceph
[12:54] * nilez (~nilez@96.44.145.58) Quit (Ping timeout: 480 seconds)
[12:54] * nilez (~nilez@96.44.142.194) has joined #ceph
[13:01] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Quit: leaving)
[13:09] * kuku (~kuku@112.203.19.176) has joined #ceph
[13:13] <T1w> hm.. I've got a smallish 3 node cluster with 6 OSDs atm
[13:14] <T1w> I'm pondering growing by adding 2 more nodes, but with 6 or 8 OSDs per node
[13:15] <T1w> this means that the new nodes would hold much more data, so how should I avoid the inherent imbalance this would cause?
[13:18] <T1w> would it be enough and "the right way" to change the OSD weight (not the CRUSH weight!) to 1/3 or 1/4 of the weight the new OSDs in the new nodes would have?
[13:19] <T1w> atm it's 1 for all 6 OSDs
[13:19] <peetaur2> that would also reduce the size used
[13:19] <peetaur2> (right...? I am not sure what crush weight is)
[13:19] <T1w> no, the CRUSH weight should reflect the capacity of the OSD
[13:20] <T1w> eg. 4TB data should have a weight of 4, 6TB 6 etc etc
[13:21] * ivve (~zed@131.red-212-170-59.staticip.rima-tde.net) Quit (Ping timeout: 480 seconds)
[13:21] <T1w> but the OSD weight is an override weight between 1 and 0 that forces CRUSH to recalculate how much that particular OSD is used
[13:23] * aj__ (~aj@94.79.172.98) Quit (Ping timeout: 480 seconds)
[13:23] * kuku (~kuku@112.203.19.176) Quit (Read error: Connection reset by peer)
[13:23] * kuku (~kuku@112.203.19.176) has joined #ceph
[13:30] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:30] <T1w> oh well.. if anyone has an idea or answer to the above at some point I'd be grateful
[13:34] <Goodi> Anyone else have problems setting up (fresh) radosgw on Jewel? It seems that the default zone is not properly set up.
[13:36] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:36] <Goodi> I kinda got it working by 1) create pools 2) create default zone 3) start radosgw 4) set the default zonegroup
[13:37] <Be-El> T1w: the data would be distributed according to the host's weight (and within the hosts according to the osd weight)
[13:37] <T1w> Be-El: yes, and the host weight would be a sum of OSD crush weight
[13:38] <T1w> so with size=3 it would be.. troublesome
[13:38] <Be-El> T1w: so if the two new hosts have about the same capacity individually as the existing cluster, one third would be on the first new host, one third on the second new host, and the remainder would be distributed among the existing hosts
[13:38] <T1w> mmm
[13:39] <T1w> actually they would probably be a bit over initial capacity
[13:39] <T1w> (I'm pondering 8 4TB OSDs per node)
[13:39] <T1w> at the moment I've got 6 of those
[13:40] <T1w> that would probably not be a good thing to do
[13:42] <Be-El> why not?
[13:43] <T1w> at some point the existing 3 nodes with 2 OSDs would not be able to hold 1/3 of the data in the cluster
[13:43] <T1w> if I went down to 6 OSDs in the new nodes they could
[13:44] <Be-El> you will have 200% more capacity, with about 8 TB overprovisioning. doesn't sound that bad to me
[13:44] <Be-El> (and more spindles, more network bandwidth etc.)
[13:45] <T1w> no, but it comes down to at least 16TB (4x 4TB OSDs) raw capacity or 5,3TB useful data that could not be placed properly
[13:47] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Killed (NickServ (Too many failed password attempts.)))
[13:47] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[13:48] <T1w> in theory (with copies=3) it would give me 29,3TB useful space
[13:48] * JANorman (~JANorman@81.137.246.31) Quit (Remote host closed the connection)
[13:49] <T1w> but I should subtract 5,3TB from that, so I'm down to 24TB useful in all
[13:49] <T1w> still quite a leap up, but..
[13:49] <Be-El> T1w: if you fill up the cluster to that capacity. more than 60-70% used capacity is a bad idea with ceph
[13:50] <T1w> actually, now that I'm fiddling with the numbers I can see that if I go down to 6 OSDs per new node I would end up with the same possible usable space..
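(The capacities being discussed can be sanity-checked with a small calculation; a sketch assuming size=3 with one replica per host, three old nodes of 2x4TB and two new nodes of 8x4TB.)

    # host capacities in TB
    hosts = [8, 8, 8, 32, 32]
    size = 3                          # replicas, each on a different host
    raw = sum(hosts)                  # 88 TB
    naive = raw / size                # ~29.3 TB if placement were unconstrained

    # a host can hold at most one replica of each object, so the usable amount U
    # is the largest value with size*U <= sum(min(capacity, U)); binary search it
    lo, hi = 0.0, naive
    for _ in range(60):
        mid = (lo + hi) / 2
        if size * mid <= sum(min(c, mid) for c in hosts):
            lo = mid
        else:
            hi = mid
    usable = lo                       # ~24 TB
    stranded = raw - size * usable    # ~16 TB raw that cannot be placed (~5.3 TB usable)
    print(round(naive, 1), round(usable, 1), round(stranded, 1))
    # with 6 OSDs per new node, hosts = [8, 8, 8, 24, 24]: usable is still ~24 TB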
[13:50] * kuku (~kuku@112.203.19.176) Quit (Read error: Connection reset by peer)
[13:50] <Be-El> and you might be able to replace the 4 TB drives with 6 or 8 TB drives in the future, given enough budget
[13:51] <T1w> hm, good point (>60-70% capacity)
[13:51] * test-cnj (~test_cnj@62.23.53.114) has joined #ceph
[13:51] <Be-El> you aren't able to put more drives in the existing hosts?
[13:51] <T1w> and yes.. changing those old OSDs to 8TB would be feasible
[13:51] * fastlife2042 (~fastlife2@122-118-145-85.ftth.glasoperator.nl) Quit (Remote host closed the connection)
[13:51] <T1w> no.. physical limits
[13:51] <T1w> 4x 3,5" bays in a 1U machine
[13:51] * fastlife2042 (~fastlife2@122-118-145-85.ftth.glasoperator.nl) has joined #ceph
[13:52] <T1w> the new nodes would be 10x 2.5" bays
[13:52] <Be-El> so you could move the 4 TB drives to the new hosts, and put 6 or 8 tb drives in the old hosts
[13:52] <T1w> more likely just retire the old desktop rated drives after 3+ years of runtime..
[13:52] <Be-El> have a closer look at the hard disk models, you don't want to end up with SMR drives as OSD drives
[13:52] <T1w> oooh no
[13:52] <Be-El> _that_ might be an excellent idea ;-)
[13:53] <T1w> but 8TB non-SMR drives should be possible to get hold of anyway
[13:53] * kuku (~kuku@112.203.19.176) has joined #ceph
[13:53] <T1w> it should not be a problem in a 3.5" version
[13:53] <Be-El> SMR drives might be a good idea for object storage once explicit support is available in ceph
[13:53] <T1w> hm, I might be able to get through with that idea..
[13:53] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[13:53] <T1w> yeah.. well.. not my exact problem.. :)
[13:54] <T1w> we're looking into the possiblity of adding rados access inside our application
[13:54] <T1w> to avoid cephfs as well as the existing RBD+NFS setup we're running now
[13:55] <T1w> we're creating 100,000+ new objects each day that are stored over NFS in several different RBDs
[13:56] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[13:56] <T1w> alas we still need a bit of metadata about those objects that rados cannot give us directly (a map from internal ID to object ID), which is easily done within a filesystem in an RBD
[14:00] * fastlife2042 (~fastlife2@122-118-145-85.ftth.glasoperator.nl) Quit (Ping timeout: 480 seconds)
[14:01] <Be-El> doesn't rados provide methods to store xattr/omap key-value pairs with the objects?
[14:02] <Be-El> and storing (supposedly immutable) objects sounds like a job for s3/radosgw
[14:02] <Be-El> in that case you'll definitely be able to store custom metadata
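(A sketch of Be-El's suggestion using the python-rados bindings; the pool name, object name, and xattr key below are made up for illustration. The internal-ID mapping rides along with the object as an xattr, so no extra filesystem is needed to keep it.)

    import rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")   # assumed config path
    cluster.connect()
    ioctx = cluster.open_ioctx("app-objects")               # hypothetical pool

    obj = "case-2016/doc-0001"                              # hypothetical object name
    ioctx.write_full(obj, b"...document payload...")        # write-once object
    ioctx.set_xattr(obj, "intern_id", b"12345")             # attach the internal ID

    print(ioctx.get_xattr(obj, "intern_id"))                # b'12345'
    ioctx.close()
    cluster.shutdown()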
[14:04] * JANorman (~JANorman@81.137.246.31) has joined #ceph
[14:04] * derjohn_mob (~aj@paketfilter.of.paketvermittlung.de) has joined #ceph
[14:04] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[14:06] * Kurt (~Adium@2001:628:1:5:5c89:d1c8:6eb0:551b) Quit (Quit: Leaving.)
[14:07] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[14:07] * diver (~diver@216.85.162.38) has joined #ceph
[14:10] <sep> is there a command to tell a pg to probe a specific osd ?
[14:11] <T1w> Be-El: well it's more a question of not having to run through rados looking for a specific object
[14:11] <T1w> of course we could probably just use sensible names
[14:11] <T1w> so given a certain context the name is also given
[14:12] <T1w> as the key to use for retrieval
[14:12] * JANorman (~JANorman@81.137.246.31) Quit (Ping timeout: 480 seconds)
[14:12] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[14:12] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit ()
[14:13] <Be-El> T1w: s3 has indices for buckets and objects, so the lookup might be easier and faster in that case. but there've also been threads on the mailing list discussing the lookup speed for large numbers of objects
[14:13] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[14:13] <T1w> mmm
[14:13] <Be-El> T1w: the good point about using librados is the fact that all clients can access the ceph hosts directly. you don't need to have a rados gateway instance (or multiple), which would become a bottleneck
[14:14] <T1w> exactly what we would want to avoid
[14:14] <T1w> also given that our objects are write once read many
[14:14] <Be-El> on the other hand if you use s3 you can use the same software independently of ceph, e.g. in amazon's cloud
[14:14] <T1w> no.. not going to happen
[14:15] <T1w> data is not something that could be placed in a public cloud
[14:15] <T1w> too sensitive and personal
[14:19] * sankarshan (~sankarsha@122.172.187.5) has joined #ceph
[14:20] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) Quit (Ping timeout: 480 seconds)
[14:22] * dan__ (~Daniel@2a00:1ee0:3:1337:d555:5ac0:f38b:f7ad) has joined #ceph
[14:23] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Ping timeout: 480 seconds)
[14:23] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[14:25] * mattbenjamin1 (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[14:30] * DanFoster (~Daniel@2a00:1ee0:3:1337:d884:4483:1ff3:377c) Quit (Ping timeout: 480 seconds)
[14:36] * test-cnj (~test_cnj@62.23.53.114) Quit ()
[14:37] * diver (~diver@216.85.162.38) Quit (Remote host closed the connection)
[14:37] * diver (~diver@95.85.8.93) has joined #ceph
[14:41] * bniver (~bniver@pool-98-110-180-234.bstnma.fios.verizon.net) has joined #ceph
[14:46] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[14:48] * JANorman (~JANorman@81.137.246.31) has joined #ceph
[14:49] * Racpatel (~Racpatel@2601:87:3:31e3::77ec) Quit (Quit: Leaving)
[14:49] * sphinxx_ (~sphinxx@41.217.204.74) has joined #ceph
[14:50] * Racpatel (~Racpatel@2601:87:3:31e3::77ec) has joined #ceph
[14:50] * derjohn_mob (~aj@paketfilter.of.paketvermittlung.de) Quit (Ping timeout: 480 seconds)
[14:54] * sphinxx (~sphinxx@41.217.204.74) Quit (Ping timeout: 480 seconds)
[15:01] * georgem (~Adium@206.108.127.16) has joined #ceph
[15:03] * derjohn_mob (~aj@b2b-94-79-172-98.unitymedia.biz) has joined #ceph
[15:04] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[15:04] * Szernex1 (~PcJamesy@exit1.radia.tor-relays.net) has joined #ceph
[15:08] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[15:14] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) Quit (Quit: valeech)
[15:17] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:18] * kuku (~kuku@112.203.19.176) Quit (Read error: Connection reset by peer)
[15:19] * kuku (~kuku@112.203.19.176) has joined #ceph
[15:23] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[15:26] * Terry (~Terry@c-98-226-9-210.hsd1.il.comcast.net) Quit (Quit: Going offline, see ya! (www.adiirc.com))
[15:29] * jtw (~john@2601:644:4000:b0bf:308a:a1fb:826f:c47e) has joined #ceph
[15:30] * rwheeler (~rwheeler@bzq-84-111-170-30.red.bezeqint.net) has joined #ceph
[15:33] * valeech (~valeech@166.170.34.204) has joined #ceph
[15:34] * Szernex1 (~PcJamesy@2RTAAAKSF.tor-irc.dnsbl.oftc.net) Quit ()
[15:37] * valeech (~valeech@166.170.34.204) Quit ()
[15:37] * JANorman_ (~JANorman@81.137.246.31) has joined #ceph
[15:39] * shaunm (~shaunm@cpe-192-180-17-174.kya.res.rr.com) has joined #ceph
[15:44] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) has joined #ceph
[15:44] * JANorman (~JANorman@81.137.246.31) Quit (Ping timeout: 480 seconds)
[15:45] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[15:55] * salwasser (~Adium@72.246.3.14) has joined #ceph
[15:55] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[15:56] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Quit: ZNC 1.6.3 - http://znc.in)
[15:57] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[15:58] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[15:59] * JANorman_ (~JANorman@81.137.246.31) Quit (Read error: Connection reset by peer)
[15:59] * JANorman (~JANorman@81.137.246.31) has joined #ceph
[15:59] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[16:00] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[16:04] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[16:08] * squizzi (~squizzi@107.13.237.240) has joined #ceph
[16:09] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:10] * jowilkin (~jowilkin@184-23-213-254.fiber.dynamic.sonic.net) Quit (Remote host closed the connection)
[16:12] * sankarshan (~sankarsha@122.172.187.5) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[16:34] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[16:35] * debian1121 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[16:35] * kristen (~kristen@134.134.139.77) has joined #ceph
[16:36] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[16:37] * MikePar (~mparson@neener.bl.org) has joined #ceph
[16:37] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[16:43] * vata (~vata@207.96.182.162) has joined #ceph
[16:44] * kefu_ (~kefu@114.92.125.128) has joined #ceph
[16:48] * kefu (~kefu@li1456-173.members.linode.com) Quit (Ping timeout: 480 seconds)
[16:49] * PaulN (~PaulN@12.139.6.226) has joined #ceph
[16:50] * Hemanth (~hkumar_@125.16.34.66) Quit (Quit: Leaving)
[16:51] * Racpatel (~Racpatel@2601:87:3:31e3::77ec) Quit (Ping timeout: 480 seconds)
[16:54] * bara (~bara@ip4-83-240-10-82.cust.nbox.cz) Quit (Remote host closed the connection)
[16:56] * derjohn_mob (~aj@b2b-94-79-172-98.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[16:57] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[17:00] * kefu_ (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[17:01] * kefu (~kefu@114.92.125.128) has joined #ceph
[17:01] * georgem1 (~Adium@206.108.127.16) has joined #ceph
[17:01] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[17:04] * PaulN (~PaulN@12.139.6.226) Quit ()
[17:06] * wushudoin (~wushudoin@2601:646:8200:c9f0:2ab2:bdff:fe0b:a6ee) has joined #ceph
[17:13] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[17:14] * kefu (~kefu@114.92.125.128) has joined #ceph
[17:16] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[17:16] * Goodi (~Hannu@85-76-98-131-nat.elisa-mobile.fi) Quit (Quit: This computer has gone to sleep)
[17:16] * yanzheng1 (~zhyan@118.116.115.45) Quit (Quit: This computer has gone to sleep)
[17:17] * yanzheng1 (~zhyan@118.116.115.45) has joined #ceph
[17:17] * yanzheng1 (~zhyan@118.116.115.45) Quit (Remote host closed the connection)
[17:18] * kefu is now known as kefu|afk
[17:19] * xarses (~xarses@64.124.158.3) has joined #ceph
[17:20] * kefu|afk (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:22] <JANorman> I'm trying to put an object using the Rados CLI interface into a pool that has hyphens, and it hangs constantly. But when I do the same against a pool that doesn't have a hyphen in it with the same profile it works fine. Should hyphens be avoided in pool names?
[17:22] * DJComet (~SEBI@46.166.138.167) has joined #ceph
[17:23] * fastlife2042 (~fastlife2@122-118-145-85.ftth.glasoperator.nl) has joined #ceph
[17:24] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[17:30] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[17:32] * mattbenjamin1 (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[17:34] * TMM (~hp@185.5.121.201) Quit (Remote host closed the connection)
[17:35] * mattbenjamin1 (~mbenjamin@12.118.3.106) has joined #ceph
[17:37] * TMM (~hp@185.5.121.201) has joined #ceph
[17:37] * kuku (~kuku@112.203.19.176) Quit (Read error: Connection reset by peer)
[17:37] * kuku (~kuku@112.203.19.176) has joined #ceph
[17:43] * Concubidated (~cube@68.140.239.164) has joined #ceph
[17:47] * jowilkin (~jowilkin@184-23-213-254.fiber.dynamic.sonic.net) has joined #ceph
[17:47] * fastlife2042 (~fastlife2@122-118-145-85.ftth.glasoperator.nl) Quit (Remote host closed the connection)
[17:49] * rmart04 (~rmart04@support.memset.com) Quit (Ping timeout: 480 seconds)
[17:51] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:52] * DJComet (~SEBI@46.166.138.167) Quit ()
[17:56] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[17:57] * hybrid512 (~walid@195.200.189.206) Quit (Remote host closed the connection)
[17:59] * fastlife2042 (~fastlife2@122-118-145-85.ftth.glasoperator.nl) has joined #ceph
[18:00] * xinli (~charleyst@32.97.110.52) has joined #ceph
[18:01] * linuxkidd (~linuxkidd@ip70-189-202-62.lv.lv.cox.net) has joined #ceph
[18:02] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[18:02] * kuku (~kuku@112.203.19.176) Quit (Read error: Connection reset by peer)
[18:03] * kuku (~kuku@112.203.19.176) has joined #ceph
[18:04] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[18:06] * sphinxx (~sphinxx@41.217.204.74) has joined #ceph
[18:08] * borei (~dan@216.13.217.230) has joined #ceph
[18:12] * jowilkin (~jowilkin@184-23-213-254.fiber.dynamic.sonic.net) Quit (Ping timeout: 480 seconds)
[18:13] * sphinxx_ (~sphinxx@41.217.204.74) Quit (Ping timeout: 480 seconds)
[18:17] * sphinxx (~sphinxx@41.217.204.74) Quit (Ping timeout: 480 seconds)
[18:18] * ntpttr (~ntpttr@134.134.139.72) has joined #ceph
[18:20] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[18:20] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[18:22] * fastlife2042 (~fastlife2@122-118-145-85.ftth.glasoperator.nl) Quit (Remote host closed the connection)
[18:23] * diver_ (~diver@216.85.162.34) has joined #ceph
[18:23] * fastlife2042 (~fastlife2@122-118-145-85.ftth.glasoperator.nl) has joined #ceph
[18:23] * georgem1 (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[18:23] * georgem (~Adium@206.108.127.16) has joined #ceph
[18:25] * vbellur (~vijay@71.234.224.255) Quit (Remote host closed the connection)
[18:28] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) has joined #ceph
[18:29] * diver (~diver@95.85.8.93) Quit (Ping timeout: 480 seconds)
[18:31] * fastlife2042 (~fastlife2@122-118-145-85.ftth.glasoperator.nl) Quit (Ping timeout: 480 seconds)
[18:31] * xinli (~charleyst@32.97.110.52) Quit (Remote host closed the connection)
[18:31] * xinli (~charleyst@32.97.110.52) has joined #ceph
[18:34] * dan__ (~Daniel@2a00:1ee0:3:1337:d555:5ac0:f38b:f7ad) Quit (Quit: Leaving)
[18:35] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[18:36] * debian1121 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[18:42] * rwheeler (~rwheeler@bzq-84-111-170-30.red.bezeqint.net) Quit (Quit: Leaving)
[18:42] * johnavp1989 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[18:44] * diver_ (~diver@216.85.162.34) Quit (Remote host closed the connection)
[18:45] * diver (~diver@95.85.8.93) has joined #ceph
[18:47] * JANorman (~JANorman@81.137.246.31) Quit (Ping timeout: 480 seconds)
[18:48] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[18:48] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[18:53] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[18:53] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[18:55] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) has joined #ceph
[18:56] * jowilkin (~jowilkin@h-74-1-189-182.snfc.ca.globalcapacity.com) has joined #ceph
[18:59] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[18:59] * kefu (~kefu@114.92.125.128) has joined #ceph
[19:02] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) Quit (Ping timeout: 480 seconds)
[19:03] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[19:10] * CampGareth (~Max@149.18.114.222) has joined #ceph
[19:11] * jowilkin (~jowilkin@h-74-1-189-182.snfc.ca.globalcapacity.com) Quit (Ping timeout: 480 seconds)
[19:12] * kefu is now known as kefu|afk
[19:12] * kuku (~kuku@112.203.19.176) Quit (Read error: Connection reset by peer)
[19:13] * kuku (~kuku@112.203.19.176) has joined #ceph
[19:22] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:b52e:d52c:63df:d5a8) Quit (Ping timeout: 480 seconds)
[19:22] * mykola (~Mikolaj@91.245.78.204) has joined #ceph
[19:27] * malevolent (~quassel@192.146.172.118) Quit (Quit: No Ping reply in 180 seconds.)
[19:28] * malevolent (~quassel@192.146.172.118) has joined #ceph
[19:29] <CampGareth> Ceph on ARM is giving me trouble, seg faults specifically
[19:30] * xinli (~charleyst@32.97.110.52) Quit (Ping timeout: 480 seconds)
[19:30] * Racpatel (~Racpatel@2601:87:3:31e3::77ec) has joined #ceph
[19:31] * Racpatel (~Racpatel@2601:87:3:31e3::77ec) Quit ()
[19:31] <CampGareth> A new Ceph OSD node joins the cluster, nothing special, logs look to my eye like the norm for a new node, then out of the blue it segfaults in thread ms_pipe_read
[19:31] * Racpatel (~Racpatel@2601:87:3:31e3::77ec) has joined #ceph
[19:33] * vbellur (~vijay@71.234.224.255) has joined #ceph
[19:34] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Remote host closed the connection)
[19:36] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:b52e:d52c:63df:d5a8) has joined #ceph
[19:42] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[19:43] * rmart04 (~rmart04@host86-185-106-132.range86-185.btcentralplus.com) has joined #ceph
[19:44] * rmart04 (~rmart04@host86-185-106-132.range86-185.btcentralplus.com) Quit ()
[19:49] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[19:51] * kuku (~kuku@112.203.19.176) Quit (Remote host closed the connection)
[19:53] * kuku (~kuku@112.203.19.176) has joined #ceph
[19:57] * kuku (~kuku@112.203.19.176) Quit (Read error: Connection reset by peer)
[19:57] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[20:00] * KindOne (kindone@h26.226.28.71.dynamic.ip.windstream.net) has joined #ceph
[20:01] * squizzi_ (~squizzi@107.13.237.240) has joined #ceph
[20:06] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:b52e:d52c:63df:d5a8) Quit (Ping timeout: 480 seconds)
[20:06] * skarn (skarn@0001f985.user.oftc.net) Quit (Quit: ZNC - http://znc.in)
[20:14] * salwasser (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[20:15] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[20:16] * jowilkin (~jowilkin@2601:648:8003:2000:ea2a:eaff:fe08:3f1d) has joined #ceph
[20:17] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[20:19] * TMM (~hp@dhcp-077-248-009-229.chello.nl) has joined #ceph
[20:28] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[20:29] * shaunm (~shaunm@cpe-192-180-17-174.kya.res.rr.com) Quit (Ping timeout: 480 seconds)
[20:29] * JANorman (~JANorman@host86-185-140-3.range86-185.btcentralplus.com) has joined #ceph
[20:30] * kefu|afk (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:30] * mykola (~Mikolaj@91.245.78.204) Quit (Read error: Connection reset by peer)
[20:31] * mykola (~Mikolaj@91.245.78.204) has joined #ceph
[20:37] * JANorman (~JANorman@host86-185-140-3.range86-185.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[20:41] * sphinxx (~sphinxx@41.217.27.146) has joined #ceph
[20:43] * jtw (~john@2601:644:4000:b0bf:308a:a1fb:826f:c47e) Quit (Ping timeout: 480 seconds)
[20:43] * sphinxx_ (~sphinxx@41.217.204.74) has joined #ceph
[20:46] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[20:47] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[20:48] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) has joined #ceph
[20:49] * sphinxx (~sphinxx@41.217.27.146) Quit (Ping timeout: 480 seconds)
[20:56] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[21:03] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[21:05] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[21:08] * Gecko1986 (~QuantumBe@64.137.196.24) has joined #ceph
[21:12] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[21:13] * georgem (~Adium@206.108.127.16) has left #ceph
[21:21] * LegalResale (~LegalResa@66.165.126.130) Quit (Quit: Leaving)
[21:23] * Lokta (~Lokta@carbon.coe.int) Quit (Ping timeout: 480 seconds)
[21:23] * LegalResale (~LegalResa@66.165.126.130) has joined #ceph
[21:30] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) has joined #ceph
[21:30] * brians (~brian@80.111.114.175) Quit (Quit: Textual IRC Client: www.textualapp.com)
[21:38] * Gecko1986 (~QuantumBe@635AAAR70.tor-irc.dnsbl.oftc.net) Quit ()
[21:42] * sphinxx (~sphinxx@41.217.27.146) has joined #ceph
[21:42] * mq (~oftc-webi@24.227.4.129) has joined #ceph
[21:43] <mq> RBD shared between clients is possible ?
[21:44] * sphinxx__ (~sphinxx@41.217.204.74) has joined #ceph
[21:46] <mq> Is mounting a shared block device on multiple hosts possible, and if so, how?
[21:48] * georgem (~Adium@206.108.127.16) has joined #ceph
[21:49] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[21:49] * sphinxx_ (~sphinxx@41.217.204.74) Quit (Ping timeout: 480 seconds)
[21:50] * sphinxx (~sphinxx@41.217.27.146) Quit (Ping timeout: 480 seconds)
[21:53] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Read error: No route to host)
[21:54] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[21:59] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[22:02] * mykola (~Mikolaj@91.245.78.204) Quit (Ping timeout: 480 seconds)
[22:02] * kristen (~kristen@134.134.139.77) Quit (Quit: Leaving)
[22:16] * mq (~oftc-webi@24.227.4.129) Quit (Quit: Page closed)
[22:16] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[22:20] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:22] * rmart04 (~rmart04@host86-185-106-132.range86-185.btcentralplus.com) has joined #ceph
[22:24] * rmart04 (~rmart04@host86-185-106-132.range86-185.btcentralplus.com) Quit ()
[22:27] * mlovell (~mlovell@69-195-66-94.unifiedlayer.com) Quit (Quit: Leaving.)
[22:27] * mlovell (~mlovell@69-195-66-94.unifiedlayer.com) has joined #ceph
[22:27] <flaf> Hi. Ceph 10.2.3 seems released. :)
[22:28] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[22:29] * xarses (~xarses@64.124.158.3) Quit (Remote host closed the connection)
[22:30] * xarses (~xarses@64.124.158.3) has joined #ceph
[22:32] * diver (~diver@95.85.8.93) Quit (Ping timeout: 480 seconds)
[22:36] * Jeffrey4l__ (~Jeffrey@101.31.233.114) has joined #ceph
[22:36] * mykola (~Mikolaj@91.245.77.90) has joined #ceph
[22:38] * Jeffrey4l_ (~Jeffrey@110.244.236.101) Quit (Ping timeout: 480 seconds)
[22:38] * linuxkidd (~linuxkidd@ip70-189-202-62.lv.lv.cox.net) Quit (Quit: Leaving)
[22:39] * mykola (~Mikolaj@91.245.77.90) Quit (Read error: No route to host)
[22:39] * mykola (~Mikolaj@91.245.77.90) has joined #ceph
[22:40] * xarses (~xarses@64.124.158.3) Quit (Ping timeout: 480 seconds)
[22:42] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[22:42] * sphinxx (~sphinxx@41.217.27.146) has joined #ceph
[22:43] * yuastnav (~Thayli@91.108.183.42) has joined #ceph
[22:45] * sphinxx_ (~sphinxx@41.217.204.74) has joined #ceph
[22:49] * sphinxx__ (~sphinxx@41.217.204.74) Quit (Ping timeout: 480 seconds)
[22:49] * CampGareth (~Max@149.18.114.222) Quit (Quit: Leaving)
[22:50] * sphinxx (~sphinxx@41.217.27.146) Quit (Ping timeout: 480 seconds)
[22:55] * Snowcat4 (~Skyrider@108.61.122.50) has joined #ceph
[22:55] * ntpttr_ (~ntpttr@134.134.139.76) has joined #ceph
[22:56] * mattbenjamin1 (~mbenjamin@12.118.3.106) Quit (Ping timeout: 480 seconds)
[23:02] * davidzlap (~Adium@2605:e000:1313:8003:c03b:403a:dc53:e2d3) has joined #ceph
[23:06] * georgem (~Adium@24.114.48.171) has joined #ceph
[23:07] * georgem (~Adium@24.114.48.171) Quit ()
[23:07] * georgem (~Adium@206.108.127.16) has joined #ceph
[23:10] * JANorman (~JANorman@host86-185-140-3.range86-185.btcentralplus.com) has joined #ceph
[23:13] * yuastnav (~Thayli@91.108.183.42) Quit ()
[23:13] * JANorman (~JANorman@host86-185-140-3.range86-185.btcentralplus.com) Quit (Remote host closed the connection)
[23:15] * mykola (~Mikolaj@91.245.77.90) Quit (Quit: away)
[23:15] * brians (~brian@80.111.114.175) has joined #ceph
[23:19] * salwasser (~Adium@2601:197:101:5cc1:71c4:643b:518e:b9cf) has joined #ceph
[23:23] * salwasser1 (~Adium@c-76-118-229-231.hsd1.ma.comcast.net) has joined #ceph
[23:25] * Snowcat4 (~Skyrider@108.61.122.50) Quit ()
[23:30] * JANorman (~JANorman@host86-185-140-3.range86-185.btcentralplus.com) has joined #ceph
[23:30] * salwasser (~Adium@2601:197:101:5cc1:71c4:643b:518e:b9cf) Quit (Ping timeout: 480 seconds)
[23:35] * mq (~oftc-webi@mobile-107-77-168-24.mobile.att.net) has joined #ceph
[23:35] * fsimonce (~simon@host98-71-dynamic.1-87-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[23:36] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Quit: leaving)
[23:39] * Racpatel (~Racpatel@2601:87:3:31e3::77ec) Quit (Ping timeout: 480 seconds)
[23:40] * georgem (~Adium@206.108.127.16) Quit (Ping timeout: 480 seconds)
[23:42] * sphinxx (~sphinxx@41.217.27.146) has joined #ceph
[23:42] * natarej (~natarej@101.188.54.14) has joined #ceph
[23:44] * sphinxx__ (~sphinxx@41.217.204.74) has joined #ceph
[23:45] <mq> Can I map the same image from two different clients and run IO in active-active mode?
[23:45] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[23:47] * JANorman (~JANorman@host86-185-140-3.range86-185.btcentralplus.com) Quit (Quit: Leaving...)
[23:48] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:50] * sphinxx_ (~sphinxx@41.217.204.74) Quit (Ping timeout: 480 seconds)
[23:50] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[23:50] * sphinxx (~sphinxx@41.217.27.146) Quit (Ping timeout: 480 seconds)
[23:51] <natarej> does ceph's implementation of erasure code have I/O amplification? e.g. for k=4 and m=2 is there an IO overhead of 3 reads 3 writes as there would be for raid 6?
[23:52] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[23:53] * johnavp19891 (~jpetrini@pool-100-14-10-2.phlapa.fios.verizon.net) has joined #ceph
[23:53] <- *johnavp19891* To prove that you are human, please enter the result of 8+3
[23:54] * ntpttr_ (~ntpttr@134.134.139.76) Quit (Remote host closed the connection)
[23:55] * davidzlap (~Adium@2605:e000:1313:8003:c03b:403a:dc53:e2d3) Quit (Ping timeout: 480 seconds)
[23:57] <mq> RBD shared between ceph clients is possible?
[23:57] <mq> ????
[23:57] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[23:59] <Concubidated> mq: you would need to be running a shared filesystem like ocfs2 on the RBD image if you want to maintain consistency and have write access from multiple RBD clients, though. Probably would make more sense to just run cephfs.

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.