#ceph IRC Log

Index

IRC Log for 2016-09-26

Timestamps are in GMT/BST.

[0:15] * vend3r (~Roy@tor.1337.la) Quit ()
[0:22] * rendar (~I@95.234.176.173) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:32] <FidoNet> is there a limit on the number of files you can have in a directory with ceph_fs ? (and is it configurable?)
[0:44] * wushudoin (~wushudoin@2601:646:8200:c9f0:2ab2:bdff:fe0b:a6ee) Quit (Quit: Leaving)
[0:56] * Unforgiven (~Sketchfil@exit0.radia.tor-relays.net) has joined #ceph
[1:10] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:13] * vegas3 (~Misacorp@108.61.123.73) has joined #ceph
[1:26] * Unforgiven (~Sketchfil@exit0.radia.tor-relays.net) Quit ()
[1:31] * [0x4A6F]_ (~ident@p508CDE85.dip0.t-ipconnect.de) has joined #ceph
[1:33] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:33] * [0x4A6F]_ is now known as [0x4A6F]
[1:34] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[1:35] * oms101 (~oms101@p20030057EA377600C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:37] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) has joined #ceph
[1:43] * vegas3 (~Misacorp@108.61.123.73) Quit ()
[1:43] * Frostshifter (~Catsceo@tor2r.ins.tor.net.eu.org) has joined #ceph
[1:44] * oms101 (~oms101@p20030057EA00A400C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:47] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[2:04] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) Quit (Ping timeout: 480 seconds)
[2:13] * Frostshifter (~Catsceo@tor2r.ins.tor.net.eu.org) Quit ()
[2:40] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) has joined #ceph
[2:48] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[2:50] * mq (~oftc-webi@adsl-64-237-227-185.prtc.net) has joined #ceph
[2:51] <mq> How to mount a cephfs in ubuntu
[2:51] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) has joined #ceph
[2:59] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[3:04] * jfaj_ (~jan@p4FC5B96A.dip0.t-ipconnect.de) has joined #ceph
[3:10] * salwasser (~Adium@2601:197:101:5cc1:31a1:161d:64a9:b108) has joined #ceph
[3:11] * jfaj (~jan@p4FC5B1E8.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:13] * aj__ (~aj@x590c6c92.dyn.telefonica.de) has joined #ceph
[3:20] * derjohn_mobi (~aj@x590e64ea.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[3:20] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:22] * EinstCra_ (~EinstCraz@58.247.119.250) has joined #ceph
[3:28] * nupanick (~Nephyrin@213.61.149.100) has joined #ceph
[3:28] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[3:29] * salwasser (~Adium@2601:197:101:5cc1:31a1:161d:64a9:b108) Quit (Quit: Leaving.)
[3:33] * yanzheng1 (~zhyan@125.70.21.68) has joined #ceph
[3:43] * yanzheng1 (~zhyan@125.70.21.68) Quit (Quit: This computer has gone to sleep)
[3:50] * yanzheng1 (~zhyan@125.70.21.68) has joined #ceph
[3:51] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) has joined #ceph
[3:58] * nupanick (~Nephyrin@213.61.149.100) Quit ()
[4:00] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[4:03] * wkennington (~wakIII@0001bde8.user.oftc.net) has joined #ceph
[4:06] * lixiaoy1 (~lixiaoy1@shzdmzpr02-ext.sh.intel.com) has joined #ceph
[4:09] * mq (~oftc-webi@adsl-64-237-227-185.prtc.net) Quit (Remote host closed the connection)
[4:16] * Gecko1986 (~PierreW@185.65.134.75) has joined #ceph
[4:29] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[4:46] * Gecko1986 (~PierreW@185.65.134.75) Quit ()
[4:48] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[5:00] * johnavp1989 (~jpetrini@pool-100-34-191-134.phlapa.fios.verizon.net) has left #ceph
[5:09] * Vacuum__ (~Vacuum@88.130.192.63) has joined #ceph
[5:16] * Vacuum_ (~Vacuum@88.130.206.182) Quit (Ping timeout: 480 seconds)
[5:17] * mitchty (~quassel@130-245-47-212.rev.cloud.scaleway.com) Quit (Remote host closed the connection)
[5:17] * mitchty (~quassel@130-245-47-212.rev.cloud.scaleway.com) has joined #ceph
[5:19] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[5:19] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[5:20] * johnavp1989 (~jpetrini@8.39.115.8) has left #ceph
[5:30] * flisky (~Thunderbi@106.38.61.184) has joined #ceph
[5:35] * leseb (~leseb@81-64-223-102.rev.numericable.fr) has joined #ceph
[5:35] * vimal (~vikumar@114.143.162.32) has joined #ceph
[5:48] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[5:55] * vimal (~vikumar@114.143.162.32) Quit (Quit: Leaving)
[6:03] * Corti^carte (~SEBI@tor2r.ins.tor.net.eu.org) has joined #ceph
[6:11] * walcubi (~walcubi@p5797A26F.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:12] * walcubi (~walcubi@p5795BD21.dip0.t-ipconnect.de) has joined #ceph
[6:23] * vimal (~vikumar@121.244.87.116) has joined #ceph
[6:31] * aj__ (~aj@x590c6c92.dyn.telefonica.de) Quit (Remote host closed the connection)
[6:33] * Corti^carte (~SEBI@tor2r.ins.tor.net.eu.org) Quit ()
[6:46] * jclm (~jclm@AVelizy-151-1-8-50.w82-120.abo.wanadoo.fr) has joined #ceph
[6:46] * jclm (~jclm@AVelizy-151-1-8-50.w82-120.abo.wanadoo.fr) Quit ()
[6:51] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[7:08] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[7:13] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:26] * derjohn_mob (~aj@tmo-101-34.customers.d1-online.com) has joined #ceph
[7:26] * flisky (~Thunderbi@106.38.61.184) Quit (Quit: flisky)
[7:27] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:28] <FidoNet> mq are you stil there ?
[7:35] * Goodi (~Hannu@office.proact.fi) has joined #ceph
[7:38] * fastlife2042 (~fastlife2@84.241.212.73) has joined #ceph
[7:51] * karnan (~karnan@125.16.34.66) has joined #ceph
[8:01] * Kurt (~Adium@2001:628:1:5:f5b4:9d16:b4e5:e14c) has joined #ceph
[8:07] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has joined #ceph
[8:08] * wkennington (~wakIII@0001bde8.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:20] * fastlife2042 (~fastlife2@84.241.212.73) Quit (Remote host closed the connection)
[8:26] * Pulp (~Pulp@63-221-50-195.dyn.estpak.ee) Quit (Read error: Connection reset by peer)
[8:32] * Concubidated (~cube@c-73-12-218-131.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[8:33] * briner (~briner@2001:620:600:1000:38d3:f52a:5239:12f3) has joined #ceph
[8:34] * derjohn_mob (~aj@tmo-101-34.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[8:44] * fastlife2042 (~fastlife2@mta.comparegroup.eu) has joined #ceph
[8:46] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:47] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:48] * GuntherDW (~Eric@exit0.radia.tor-relays.net) has joined #ceph
[8:56] * branto (~branto@178.253.162.116) has joined #ceph
[8:58] * analbeard (~shw@host86-142-132-208.range86-142.btcentralplus.com) has joined #ceph
[9:05] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[9:11] * branto1 (~branto@178.253.162.116) has joined #ceph
[9:11] * Hidendra (~Tumm@exit0.radia.tor-relays.net) has joined #ceph
[9:15] * derjohn_mob (~aj@46.189.28.56) has joined #ceph
[9:17] * Sue_ (~sue@2601:204:c600:d638:6600:6aff:fe4e:4542) has joined #ceph
[9:17] * branto (~branto@178.253.162.116) Quit (Ping timeout: 480 seconds)
[9:18] * erwan_taf (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[9:18] * GuntherDW (~Eric@exit0.radia.tor-relays.net) Quit ()
[9:32] * jklare (~jklare@185.27.181.36) Quit (Quit: ZNC - http://znc.in)
[9:33] * jklare (~jklare@185.27.181.36) has joined #ceph
[9:36] * tuxcraft1r (~jelle@ebony.powercraft.nl) Quit (Ping timeout: 480 seconds)
[9:40] * branto1 (~branto@178.253.162.116) Quit (Ping timeout: 480 seconds)
[9:41] * Hidendra (~Tumm@exit0.radia.tor-relays.net) Quit ()
[9:45] * tuxcrafter (~jelle@ebony.powercraft.nl) has joined #ceph
[9:48] * TMM (~hp@dhcp-077-248-009-229.chello.nl) Quit (Quit: Ex-Chat)
[9:51] * Concubidated (~cube@c-73-12-218-131.hsd1.ca.comcast.net) has joined #ceph
[9:52] * branto (~branto@178.253.162.116) has joined #ceph
[10:02] * branto1 (~branto@178.253.162.116) has joined #ceph
[10:06] * DanFoster (~Daniel@2a00:1ee0:3:1337:6438:5ce:62d5:b7d0) has joined #ceph
[10:08] * kefu (~kefu@114.92.125.128) has joined #ceph
[10:09] * branto (~branto@178.253.162.116) Quit (Ping timeout: 480 seconds)
[10:12] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[10:13] * kefu (~kefu@114.92.125.128) has joined #ceph
[10:13] * branto1 (~branto@178.253.162.116) Quit (Quit: Leaving.)
[10:14] * branto (~branto@178.253.162.116) has joined #ceph
[10:16] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[10:17] * kefu (~kefu@114.92.125.128) has joined #ceph
[10:17] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:22] * lmb (~Lars@2a02:8109:8100:1d2c:2ad2:44ff:fedf:3318) has joined #ceph
[10:27] * rraja (~rraja@125.16.34.66) has joined #ceph
[10:28] * rraja (~rraja@125.16.34.66) Quit ()
[10:28] * rraja (~rraja@125.16.34.66) has joined #ceph
[10:29] * branto1 (~branto@178.253.162.116) has joined #ceph
[10:31] * TheDoudou_a (~KeeperOfT@185.65.134.75) has joined #ceph
[10:34] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:35] * branto (~branto@178.253.162.116) Quit (Ping timeout: 480 seconds)
[10:42] * briner (~briner@2001:620:600:1000:38d3:f52a:5239:12f3) Quit (Remote host closed the connection)
[10:42] * briner (~briner@2001:620:600:1000:38d3:f52a:5239:12f3) has joined #ceph
[10:42] * TMM (~hp@185.5.121.201) has joined #ceph
[10:44] * nigwil (~Oz@li1416-21.members.linode.com) Quit (Quit: leaving)
[10:44] * nigwil (~Oz@li1416-21.members.linode.com) has joined #ceph
[10:52] * krogon (~krogon@irdmzpr01-ext.ir.intel.com) Quit (Read error: Connection reset by peer)
[10:53] * krogon (~krogon@irdmzpr01-ext.ir.intel.com) has joined #ceph
[10:55] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[10:57] * rendar (~I@95.238.176.98) has joined #ceph
[10:57] * lixiaoy1 (~lixiaoy1@shzdmzpr02-ext.sh.intel.com) Quit (Remote host closed the connection)
[10:58] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[10:59] * branto1 (~branto@178.253.162.116) Quit (Ping timeout: 480 seconds)
[11:01] * TheDoudou_a (~KeeperOfT@185.65.134.75) Quit ()
[11:06] * branto (~branto@178.253.162.116) has joined #ceph
[11:06] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[11:09] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:b52e:d52c:63df:d5a8) has joined #ceph
[11:29] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[11:32] * branto (~branto@178.253.162.116) Quit (Ping timeout: 480 seconds)
[11:36] * rdas (~rdas@121.244.87.113) has joined #ceph
[11:39] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Quit: Ex-Chat)
[11:39] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[11:39] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[11:43] * branto (~branto@178.253.162.116) has joined #ceph
[11:50] * TheDoudou_a (~AG_Clinto@108.61.122.154) has joined #ceph
[11:58] * branto (~branto@178.253.162.116) Quit (Ping timeout: 480 seconds)
[11:59] * analbeard (~shw@host86-142-132-208.range86-142.btcentralplus.com) Quit (Quit: Leaving.)
[11:59] * branto (~branto@178.253.162.116) has joined #ceph
[12:04] * fastlife2042 (~fastlife2@mta.comparegroup.eu) Quit ()
[12:04] * gmoro (~guilherme@193.120.208.221) has joined #ceph
[12:07] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[12:07] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[12:16] * EinstCra_ (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:20] * TheDoudou_a (~AG_Clinto@108.61.122.154) Quit ()
[12:22] * gfidente (~gfidente@0001ef4b.user.oftc.net) has joined #ceph
[12:27] * branto (~branto@178.253.162.116) Quit (Ping timeout: 480 seconds)
[12:27] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[12:28] * branto (~branto@178.253.162.116) has joined #ceph
[12:47] <Kvisle> I can see that ceph 10.2.3 packages are visible in our repos now, but I haven't seen any release notes anywhere
[12:47] <Kvisle> is it really-really-new, or am I missing something?
[12:47] * karnan (~karnan@125.16.34.66) Quit (Remote host closed the connection)
[12:48] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[12:49] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[12:53] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[12:54] * branto (~branto@178.253.162.116) Quit (Ping timeout: 480 seconds)
[12:59] * johnavp1989 (~jpetrini@pool-100-34-191-134.phlapa.fios.verizon.net) has joined #ceph
[12:59] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[13:08] * branto (~branto@178.253.162.116) has joined #ceph
[13:16] * measter (~cryptk@178-175-128-50.static.host) has joined #ceph
[13:28] * salwasser (~Adium@2601:197:101:5cc1:7871:2892:7a31:a78d) has joined #ceph
[13:29] * salwasser (~Adium@2601:197:101:5cc1:7871:2892:7a31:a78d) Quit ()
[13:32] * bloatyfloat (~bloatyflo@46.37.172.253.srvlist.ukfast.net) Quit (Ping timeout: 480 seconds)
[13:32] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[13:35] * branto (~branto@178.253.162.116) Quit (Ping timeout: 480 seconds)
[13:36] * branto (~branto@178.253.162.116) has joined #ceph
[13:42] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[13:46] * measter (~cryptk@178-175-128-50.static.host) Quit ()
[13:46] * click1 (~lobstar@108.61.122.154) has joined #ceph
[14:02] * icey (~Chris@0001bbad.user.oftc.net) Quit (Quit: icey)
[14:03] * analbeard (~shw@support.memset.com) has joined #ceph
[14:04] * icey (~Chris@pool-71-162-145-72.phlapa.fios.verizon.net) has joined #ceph
[14:07] * icey (~Chris@0001bbad.user.oftc.net) Quit (Remote host closed the connection)
[14:07] * dosaboy_ (~dosaboy@host86-185-230-163.range86-185.btcentralplus.com) has joined #ceph
[14:07] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) has joined #ceph
[14:08] * icey (~Chris@pool-71-162-145-72.phlapa.fios.verizon.net) has joined #ceph
[14:08] * icey (~Chris@0001bbad.user.oftc.net) Quit (Remote host closed the connection)
[14:08] * cnf (~cnf@d5152daf0.static.telenet.be) has joined #ceph
[14:09] * dosaboy_ (~dosaboy@host86-185-230-163.range86-185.btcentralplus.com) Quit ()
[14:10] * icey (~Chris@pool-71-162-145-72.phlapa.fios.verizon.net) has joined #ceph
[14:14] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (Quit: leaving)
[14:14] * johnavp1989 (~jpetrini@pool-100-34-191-134.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[14:16] * click1 (~lobstar@108.61.122.154) Quit ()
[14:19] * cnf (~cnf@d5152daf0.static.telenet.be) Quit (Remote host closed the connection)
[14:20] * cnf (~cnf@81.82.218.240) has joined #ceph
[14:21] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[14:23] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) Quit (Quit: valeech)
[14:28] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[14:28] * b0e1 (~aledermue@fw.netways.de) has joined #ceph
[14:29] * b0e (~aledermue@213.95.25.82) Quit (Ping timeout: 480 seconds)
[14:29] * bniver (~bniver@nat-pool-bos-u.redhat.com) has joined #ceph
[14:32] * rwheeler (~rwheeler@bzq-84-111-170-30.red.bezeqint.net) has joined #ceph
[14:33] * owlbot (~supybot@pct-empresas-50.uc3m.es) has joined #ceph
[14:35] * bara_ (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[14:35] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Read error: Connection reset by peer)
[14:35] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) has joined #ceph
[14:35] * ChanServ sets mode +o nhm
[14:38] * Racpatel (~Racpatel@2601:87:3:31e3::77ec) has joined #ceph
[14:38] * racpatel1 (~Racpatel@2601:87:3:31e3::77ec) has joined #ceph
[14:44] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:49] * cnf (~cnf@81.82.218.240) Quit (Quit: cnf)
[14:51] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[14:53] * b0e1 (~aledermue@fw.netways.de) Quit (Ping timeout: 480 seconds)
[14:54] * nilez (~nilez@155.94.248.194) Quit (Ping timeout: 480 seconds)
[14:55] * nilez (~nilez@209.95.50.118) has joined #ceph
[14:57] * Thanos (~Thanos@host-212-204-110-157.customer.m-online.net) has joined #ceph
[14:58] <Thanos> Hello everyone. I would like to use CEPH along with IoT devices. Probably want to make some benchmarks with very small files. Has anyone done this topic before? Any advice?
[15:06] * b0e (~aledermue@213.95.25.82) has joined #ceph
[15:06] * valeech (~valeech@wsip-98-175-102-67.dc.dc.cox.net) has joined #ceph
[15:09] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[15:09] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:13] * Kioob (~Kioob@cxr69-6-82-236-108-177.fbx.proxad.net) has joined #ceph
[15:14] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[15:14] * mq (~oftc-webi@24.139.73.106) has joined #ceph
[15:15] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[15:15] <mq> I can user CephFS to mount this in multiple host sharing the same data ?
[15:16] <rkeene> Yes
[15:16] <mq> Is recommend to production ?
[15:17] <rkeene> CephFS is production as of Jewel (10.2)
[15:17] <mq> ok my ceph is jewel, my crush profile is hammer
[15:17] <mq> no problem with this ?
[15:18] <doppelgrau> Thanos: can you explain the usecase a bit more? distributed ceph over unreliable links and wan, backend for zentral services ...
[15:18] <rkeene> mq, No problem with it as long as "ceph status" says HEALTH_OK
[15:18] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[15:18] <mq> thanks
[15:19] * b0e (~aledermue@213.95.25.82) has joined #ceph
[15:20] <mq> you have tutorial of information site to implement cephfs and mount this in the client ?
[15:20] <mq> step by step
[15:20] <mq> or *
[15:20] <Thanos> my main aim is to combine CEPH with IoT
[15:20] <peetaur2> there's some stuff to read to use it safely http://docs.ceph.com/docs/jewel/cephfs/early-adopters/ which basically says don't enable things from http://docs.ceph.com/docs/master/cephfs/experimental-features/
[15:21] <peetaur2> ^ mq
[15:21] <Thanos> So i would like to check how CEPH behaves by having an influx of many small files.
[15:22] <Thanos> it is a research project, I do not have a very specific aim at the moment
[15:22] <rkeene> There is probably there some documentation on the Ceph webpage, but I can give you a brief overview. 1. Setup monitor node(s) -- atleast 1, probably 3; 2. Setup Ceph ODS's (1 ODS == 1 physical disk, normally), atleast one; 3. Setup Ceph MDS/FS (name changed in 10.2, but I think it's still called "mds" in most places); 4. Create a Ceph FS; 5. Mount it up
[15:23] <Thanos> CEPH seems to be a reliable storage platform, but I would like to see how it behaves with a constant influx of many small files from IoT sensors
[15:23] <rkeene> Thanos, The performance will vary wildly based on the number of OSDs
[15:23] <Thanos> well, how about me doing some research on that?
[15:24] <mq> Thanks you very much peertaur and rkeene
[15:24] <rkeene> Thanos, Go for it
[15:24] * notarima (~blank@tor-exit.squirrel.theremailer.net) has joined #ceph
[15:24] <etienneme> Thanos: it's file that you will edit?
[15:24] <Thanos> a graph that shows how is performance affected by the number of OSD's
[15:24] <Thanos> etienneme, I did not understand the question
[15:25] <etienneme> You will create a lot of tiny files, will you edit those files?
[15:25] <Thanos> I would say no. It is supposed to be sensor data, why would I edit it
[15:26] <etienneme> don't know :) but then you may want to also bench rados
[15:26] <doppelgrau> Thanos: using s3, cephfs or librados directly
[15:26] <Be-El> Thanos: ceph itself does not handle files, you'll need cephfs for that. or use the lower level rados layer
[15:27] <Thanos> how about using the object storage Be-El?
[15:27] <doppelgrau> Thanos: the first two had some performance implications if the number of files in a folder/bucket gets too large, with rados there is only the constant overhead creating a new object
[15:27] * Concubidated (~cube@c-73-12-218-131.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[15:28] <doppelgrau> (and with very very very many objects, the lookups might a bit slower, since the depth of the folder hierachie on the osds increases)
[15:29] <Thanos> My main thought is that: I have 10 sensors generating data. These sensors will create very small files, probably every 1-2 seconds. All this data could be stored in a Ceph Cluster.
[15:29] <Be-El> Thanos: object storage introduces overhead, certain drawbacks (see doppelgrau), but on the other hand you can also use different implementations, e.g. amazon s3
[15:29] <Thanos> Does amazon s3 also require a gateway?
[15:30] <Be-El> what size is 'small'?
[15:30] <Thanos> Be-El that is a good question
[15:30] <doppelgrau> Thanos: and an other question: how should these files retrieved?
[15:30] <Thanos> lets say that these files will not have many reads
[15:31] * squizzi (~squizzi@107.13.237.240) has joined #ceph
[15:31] <Thanos> maybe we will keep these files for a couple of years
[15:31] <doppelgrau> Thanos: storing objects is easy, but you need to know which object to retieve if you want access an object => potentially some "lookup table"
[15:32] <Thanos> I do not know at the moment on what basis these files will be accessed. I do not plan to have live analytics with them
[15:32] <Thanos> or even if I do have live analytics, this will happen a step (or two) before going into the CEPH cluster
[15:34] * OODavo1 (~mrapple@46.166.136.162) has joined #ceph
[15:34] <Thanos> I do understand that it is somewhat complex at the moment, but I am in my first steps
[15:34] <Thanos> and I thank you all for your contributions
[15:35] <doppelgrau> Thanos: if you use the object storage, you need to know how to identify the object that you want to retieve
[15:35] * salwasser (~Adium@72.246.3.14) has joined #ceph
[15:35] <Thanos> how can I do that
[15:36] <doppelgrau> iteration over all stored objects is painfully slow (in a reasonable sized cluster witrh small files = days)
[15:36] <Thanos> So it would be better to use CephFS?
[15:37] <doppelgrau> Thanos: depends on your application, storing some dictionaries, having some sort of schema
[15:37] <BranchPredictor> Thanos: how "very small" would be your files?
[15:37] <Thanos> BranchPredictor I do not know yet
[15:37] <Be-El> for really small datasets i would propose to use a key-value store
[15:37] <doppelgrau> Thanos: in the end same problem, you so not want too many files in the same folder ...
[15:37] <BranchPredictor> few bytes? few hundred bytes? few KBs?
[15:38] <Thanos> maybe 1 KB? How big is a file from an IoT sensor?
[15:38] <BranchPredictor> if less than 4kb then you may want to combine few files into one object.
[15:38] <Thanos> How can I do that
[15:38] <BranchPredictor> for example, data from entire 60 second run stored in one file
[15:39] <Thanos> that is actually a good idea
[15:39] <doppelgrau> Thanos: depends on the sensor, a IP-hd cam (IoT device) can deliver a few mb/second
[15:39] <Thanos> the sensors that I have in mind are connected to a machine and they are just stating the machine status
[15:40] <Thanos> so that would create a small text file with just some data. I would guess...1 KB?
[15:40] <BranchPredictor> or less.
[15:40] <Thanos> or less
[15:40] <doppelgrau> basically an identifier, timestamp and a few values? In that case I'd say some backend to aggregate those would be a good idea
[15:40] <Thanos> by the way, is there any similar work done until now
[15:42] <doppelgrau> Thanos: maybe some approaches to analyse logfiles might be interessting, logstash, graylog and so on
[15:42] <Thanos> OK I will write that
[15:42] <Be-El> or librrd with some own staging fucntionality
[15:42] <doppelgrau> Thanos: many devices feeding data with timestamps and few values
[15:42] <Thanos> any similar work?
[15:42] <mq> jfontan
[15:46] <Thanos> I will take into account all your contributions, thank you
[15:54] * notarima (~blank@tor-exit.squirrel.theremailer.net) Quit ()
[15:54] * wes_dillingham (~wes_dilli@140.247.242.44) has joined #ceph
[15:57] * rmart04 (~rmart04@support.memset.com) has joined #ceph
[16:03] * rwheeler (~rwheeler@bzq-84-111-170-30.red.bezeqint.net) Quit (Quit: Leaving)
[16:04] * OODavo1 (~mrapple@46.166.136.162) Quit ()
[16:17] * thomnico (~thomnico@2a01:e35:8b41:120:6d2c:6867:3dd9:9b22) has joined #ceph
[16:19] * fsimonce (~simon@host98-71-dynamic.1-87-r.retail.telecomitalia.it) has joined #ceph
[16:20] * fsimonce (~simon@host98-71-dynamic.1-87-r.retail.telecomitalia.it) Quit ()
[16:21] * gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) has joined #ceph
[16:23] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[16:25] * fsimonce (~simon@host98-71-dynamic.1-87-r.retail.telecomitalia.it) has joined #ceph
[16:31] * ChuckMe (~ChuckMe@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[16:32] <walcubi> I see there's 10.2.3 in the debian repos, but no changelog/release notes in ceph.com.
[16:35] * gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) Quit (Quit: Textual IRC Client: www.textualapp.com)
[16:35] * ChuckMe (~ChuckMe@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Read error: Connection reset by peer)
[16:35] * kristen (~kristen@134.134.139.74) has joined #ceph
[16:36] * gila (~gila@5ED4FE92.cm-7-5d.dynamic.ziggo.nl) has joined #ceph
[16:37] * thomnico (~thomnico@2a01:e35:8b41:120:6d2c:6867:3dd9:9b22) Quit (Ping timeout: 480 seconds)
[16:37] * yanzheng1 (~zhyan@125.70.21.68) Quit (Quit: This computer has gone to sleep)
[16:38] * valeech_ (~valeech@wsip-98-175-102-67.dc.dc.cox.net) has joined #ceph
[16:39] * valeech (~valeech@wsip-98-175-102-67.dc.dc.cox.net) Quit (Ping timeout: 480 seconds)
[16:39] * valeech_ is now known as valeech
[16:39] * kefu (~kefu@114.92.125.128) has joined #ceph
[16:39] * ChuckMe (~ChuckMe@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[16:40] * Sliker (~Peaced@46.166.188.211) has joined #ceph
[16:43] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[16:43] * ChuckMe (~ChuckMe@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Read error: Connection reset by peer)
[16:44] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (Quit: leaving)
[16:45] * Thanos (~Thanos@host-212-204-110-157.customer.m-online.net) Quit ()
[16:45] * valeech (~valeech@wsip-98-175-102-67.dc.dc.cox.net) Quit (Quit: valeech)
[16:46] * dosaboy (~dosaboy@host86-185-230-163.range86-185.btcentralplus.com) has joined #ceph
[16:46] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:46] * dosaboy (~dosaboy@host86-185-230-163.range86-185.btcentralplus.com) Quit ()
[17:00] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[17:02] <mq> #RedHat
[17:02] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:03] * wushudoin (~wushudoin@2601:646:8200:c9f0:2ab2:bdff:fe0b:a6ee) has joined #ceph
[17:06] * Sliker (~Peaced@46.166.188.211) Quit ()
[17:08] * Kingrat (~shiny@2605:6000:1526:4063:d16a:5007:d070:aec1) Quit (Remote host closed the connection)
[17:09] * xarses (~xarses@64.124.158.3) has joined #ceph
[17:09] * xarses (~xarses@64.124.158.3) Quit (Remote host closed the connection)
[17:09] * xarses (~xarses@64.124.158.3) has joined #ceph
[17:10] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) Quit (Ping timeout: 480 seconds)
[17:14] * ntpttr_ (~ntpttr@134.134.139.83) has joined #ceph
[17:15] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) has joined #ceph
[17:16] * mattbenjamin1 (~mbenjamin@12.118.3.106) has joined #ceph
[17:17] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[17:19] * dosaboy (~dosaboy@host86-185-230-163.range86-185.btcentralplus.com) has joined #ceph
[17:19] * ChuckMe (~ChuckMe@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[17:19] <frickler> wido: if you want to meet some bcache folk, you might want to check over in #bcache, even py1hon does some idling there
[17:22] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[17:24] * jowilkin (~jowilkin@184-23-213-254.fiber.dynamic.sonic.net) has joined #ceph
[17:24] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[17:25] * kefu (~kefu@114.92.125.128) has joined #ceph
[17:30] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:32] * hasues (~hasues@204.78.58.43) has joined #ceph
[17:33] * hasues (~hasues@204.78.58.43) Quit ()
[17:35] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[17:35] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[17:36] * kefu (~kefu@114.92.125.128) has joined #ceph
[17:37] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:38] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[17:42] * trociny (~Mikolaj@91.245.75.235) has joined #ceph
[17:49] * ntpttr_ (~ntpttr@134.134.139.83) Quit (Remote host closed the connection)
[17:52] * hyperbaba (~hyperbaba@80.74.175.250) has joined #ceph
[17:52] <hyperbaba> Hi there,
[17:54] * rdas (~rdas@121.244.87.113) Quit (Quit: Leaving)
[17:55] <hyperbaba> I have a broken ceph cluster (1 incomplete pg) which i can't repair. What I want to do is to get the objects out of the radosgw. The problem is that when request comes to some missing /broken object it creates a blocked requests and i can't continue. Is there a way to get the data out or kill blocked requests?
[17:55] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[17:55] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) has joined #ceph
[17:56] * kefu (~kefu@114.92.125.128) has joined #ceph
[17:56] * dosaboy (~dosaboy@host86-185-230-163.range86-185.btcentralplus.com) Quit (Quit: leaving)
[17:56] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[17:57] * Goodi (~Hannu@office.proact.fi) Quit (Quit: This computer has gone to sleep)
[17:58] * linuxkidd (~linuxkidd@ip70-189-214-97.lv.lv.cox.net) has joined #ceph
[18:00] * Skaag (~lunix@65.200.54.234) has joined #ceph
[18:01] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[18:02] * kefu is now known as kefu|afk
[18:03] * xinli (~charleyst@32.97.110.54) has joined #ceph
[18:08] * Kioob (~Kioob@cxr69-6-82-236-108-177.fbx.proxad.net) Quit (Quit: Leaving.)
[18:13] * gfidente (~gfidente@0001ef4b.user.oftc.net) Quit (Quit: bye)
[18:13] <doppelgrau> hyperbaba: force recreate the pg
[18:14] <doppelgrau> hyperbaba: should fix the PG (with dataloss), perhaps just marking old osds as lost might help too
[18:15] <doppelgrau> hyperbaba: but do not know how it affects the s3 part (gues some error for the objects in the index but not on disk)
[18:17] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:24] * ntpttr_ (~ntpttr@134.134.139.76) has joined #ceph
[18:25] * kefu|afk (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[18:25] * ntpttr_ (~ntpttr@134.134.139.76) Quit ()
[18:25] * hyperbaba (~hyperbaba@80.74.175.250) Quit (Ping timeout: 480 seconds)
[18:31] * rmart04 (~rmart04@support.memset.com) Quit (Ping timeout: 480 seconds)
[18:37] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) has joined #ceph
[18:37] * xinli (~charleyst@32.97.110.54) Quit (Remote host closed the connection)
[18:37] * branto (~branto@178.253.162.116) Quit (Quit: Leaving.)
[18:38] * xinli (~charleyst@32.97.110.54) has joined #ceph
[18:40] * shaunm (~shaunm@50-5-227-85.dynamic.fuse.net) has joined #ceph
[18:41] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) has joined #ceph
[18:42] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) Quit (Quit: Leaving)
[18:43] <mq> Hi I am searching support to ceph ?
[18:46] * ffilz (~ffilz@c-76-115-190-27.hsd1.or.comcast.net) has joined #ceph
[18:46] * xarses (~xarses@64.124.158.3) Quit (Quit: Leaving)
[18:46] * TheSov (~TheSov@108-75-213-57.lightspeed.cicril.sbcglobal.net) has joined #ceph
[18:46] * xarses (~xarses@64.124.158.3) has joined #ceph
[18:46] <TheSov> what happens if the cluster network goes down?
[18:46] * bara_ (~bara@nat-pool-brq-t.redhat.com) Quit (Remote host closed the connection)
[18:51] * hyperbaba (~hyperbaba@80.74.175.250) has joined #ceph
[18:52] <hyperbaba> dopelgrau: how can i do that?
[18:53] <hyperbaba> doppelgrau: how can i do that?
[18:54] <hyperbaba> doppelgrau:i am not that good with ceph
[18:56] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) has joined #ceph
[18:58] <blizzow> If I'm running some OSD nodes on VMs in esxi with raid controllers, where and what should my cache settings be for OSDs? Should I have cache enabled on the hypervisor, and none on the OSD node VM, or the other way around?
[19:00] * jarrpa (~jarrpa@167.220.99.146) has joined #ceph
[19:00] * rraja (~rraja@125.16.34.66) Quit (Ping timeout: 480 seconds)
[19:02] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) Quit (Quit: Leaving.)
[19:03] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[19:04] <hyperbaba> doppelgrau:Found it. But now the state is creating
[19:07] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) has joined #ceph
[19:12] * xarses_ (~xarses@64.124.158.3) has joined #ceph
[19:12] * xarses (~xarses@64.124.158.3) Quit (Read error: Connection reset by peer)
[19:13] * DanFoster (~Daniel@2a00:1ee0:3:1337:6438:5ce:62d5:b7d0) Quit (Quit: Leaving)
[19:13] <doppelgrau> hyperbaba: all OSDS up and in?
[19:14] * med is now known as george12
[19:14] * george12 is now known as med
[19:16] <hyperbaba> doppelgrau: yes
[19:17] <wes_dillingham> Any known issues with ceph and 4.7 kernel, planning on upgrading kernel today
[19:19] * bloatyfloat (~bloatyflo@46.37.172.253.srvlist.ukfast.net) has joined #ceph
[19:25] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:26] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[19:27] <evilrob> so... our radosgw user DB seems to be corrupt. users that used to work, no longer do. we only have a couple that matter. can we just blow it all away?
[19:28] * rmart04 (~rmart04@109.153.193.5) has joined #ceph
[19:29] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) Quit (Remote host closed the connection)
[19:30] * hasues (~hasues@204.78.58.43) has joined #ceph
[19:30] * hasues (~hasues@204.78.58.43) has left #ceph
[19:31] * Concubidated (~cube@68.140.239.164) has joined #ceph
[19:32] <evilrob> can I just delete and recreate the .users.* pools?
[19:34] * davidzlap (~Adium@rrcs-74-87-213-28.west.biz.rr.com) Quit (Quit: Leaving.)
[19:35] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[19:35] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[19:38] <doppelgrau> hyperbaba: strange
[19:39] <hyperbaba> doppelgrau: i've tried to check the bucket and fix it with --check-object ...but it finished too fast for a 1.5M object bucket
[19:39] <hyperbaba> doppelgrau: i just got [] as a result
[19:40] * TheSov (~TheSov@108-75-213-57.lightspeed.cicril.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[19:41] * TheSov (~TheSov@108-75-213-57.lightspeed.cicril.sbcglobal.net) has joined #ceph
[19:42] <TheSov> hey guys does anyone know what happens when the cluster network goes down/
[19:43] <SamYaple> TheSov: badness!
[19:43] <TheSov> but what exactly
[19:43] <SamYaple> TheSov: when mine went down it was an osd storm of online/offline
[19:43] <T1> same as loosing all monitors
[19:44] <SamYaple> the osds on a single box could see eachother, but the hearbeats go over the cluster network so they kept marking eachother down
[19:44] <TheSov> hmmmm
[19:44] * bniver (~bniver@nat-pool-bos-u.redhat.com) Quit (Remote host closed the connection)
[19:44] <TheSov> so theirs no graceful way to do this
[19:44] <SamYaple> this was back in Firefly though, behaviour might have changed
[19:44] <T1> and given that the MONs are not available the CRUSh map would be unavailable and no data could be written og read
[19:44] <T1> or read even
[19:45] <SamYaple> T1: the mons don't talk on the cluster network
[19:45] <SamYaple> T1: or rather, not primarily
[19:45] <evilrob> hmmm... renamed all my .users* pools. created new ones. didn't change anything
[19:45] <SamYaple> they keep quorum over the public network i bleive
[19:45] <TheSov> so i hae a small ceph cluster at home, i guess i can unplug the cluster network and see what happens
[19:45] <T1> SamYaple: doesn't the OSDs talk to the MONs over the cluster network?
[19:45] <TheSov> it runs jewel
[19:45] * dosaboy_ (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[19:46] <TheSov> no all osd to mon communcation is via public network
[19:46] <TheSov> you should not have mons in the cluster network
[19:46] <SamYaple> T1: i cant quite rmemeber if any heartbeats for mons go over cluster. that would be the only thing
[19:46] * dosaboy_ (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (Remote host closed the connection)
[19:46] <TheSov> heartbeats to /from osds do go over cluster network
[19:46] <SamYaple> TheSov: i want to say there was some mon traffic on cluster network, but i cant remember. the majority was not on cluster network
[19:46] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (Read error: Connection reset by peer)
[19:47] <TheSov> there is not because my mons have no connection to the cluster network
[19:47] <TheSov> how could their be any communication of mons on the cluster network?
[19:47] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[19:47] <TheSov> only my osd hosts have a cluster network nic
[19:48] <TheSov> wait you have mons on your cluster network?
[19:50] <TheSov> http://docs.ceph.com/docs/jewel/rados/configuration/network-config-ref/ look
[19:50] <TheSov> its in the graphic right there, no mons on cluster network
[19:52] <TheSov> the reason i ask is im about to do my biggest deployment ever, 90PB each system only has 2 10gig nics, i figure one for public and 1 for cluster
[19:52] <SamYaple> TheSov: yea that sounds right. I don't have my mons bound to anything on the cluster network (though just due ot the configuration, they have access to cluster network)
[19:52] * vasu (~vasu@38.140.108.2) has joined #ceph
[19:52] <SamYaple> TheSov: oh yea, don't do that. that will give you huge headaches
[19:52] <SamYaple> i did that in a lab and did some failure testing, huge problem
[19:53] <TheSov> what was the problem exactly the heartbeat stuff?
[19:53] <TheSov> so my public network is a ring topoloy
[19:53] <TheSov> topology
[19:53] <SamYaple> so if the cluster network went down, the osds on the host with the cluster network down would mark all the other osds down, and the other osds would make all the cluster-network-down host osds down
[19:53] <TheSov> but my cluster network is just 2 switches
[19:53] <SamYaple> just a marking down storm
[19:54] <TheSov> sorry tis 3 swithes
[19:54] <TheSov> not 2
[19:54] <SamYaple> mind you, you should really test this. this was last tested in Firefly for me
[19:55] <TheSov> i find it odd that theres no gracefull failure for cluster network. there is for public...
[19:55] <SamYaple> TheSov: maybe things have changed since Firefly? its been over 2 years
[19:56] <SamYaple> TheSov: only semi related, i just did something related to this at my house for my small ceph cluster tehre https://yaple.net/2016/09/21/bonding-bridging-and-port-density/
[19:57] <TheSov> yuck bondig...
[19:57] <TheSov> i hate it
[19:57] <TheSov> but its got its places
[19:58] <TheSov> so the way i got things running now each osd node has 1 10 gig for public and 1 10 gig for cluster
[19:58] <TheSov> twinax
[19:59] <TheSov> the public is a 8 switch ring
[19:59] <TheSov> and the cluster is a 3 switch ring
[20:00] <TheSov> if i need to update the cluster network switches that means 1/3rd of all osds goes down at one time?
[20:02] * rakeshgm (~rakesh@38.140.108.2) has joined #ceph
[20:02] * rmart04 (~rmart04@109.153.193.5) Quit (Read error: Connection reset by peer)
[20:03] * linuxkidd (~linuxkidd@ip70-189-214-97.lv.lv.cox.net) Quit (Remote host closed the connection)
[20:03] * rmart04 (~rmart04@host109-153-193-5.range109-153.btcentralplus.com) has joined #ceph
[20:03] * linuxkidd (~linuxkidd@ip70-189-214-97.lv.lv.cox.net) has joined #ceph
[20:06] <[arx]> i would assume so
[20:09] <SamYaple> TheSov: youll want to admin mark those down ahead of time, or noout/nodown so you dont have a storm of osds being marked down
[20:09] <SamYaple> I would recommend marking them down first
[20:09] <SamYaple> otherwise youll have client timeouts
[20:09] * rmart04 (~rmart04@host109-153-193-5.range109-153.btcentralplus.com) Quit (Quit: rmart04)
[20:13] * Be-El (~blinke@nat-router.computational.bio.uni-giessen.de) has left #ceph
[20:16] * linuxkidd (~linuxkidd@ip70-189-214-97.lv.lv.cox.net) Quit (Quit: Leaving)
[20:17] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) has joined #ceph
[20:29] * vikhyat (~vumrao@103.3.43.75) has joined #ceph
[20:30] * Tenk (~Tonux@108.61.122.88) has joined #ceph
[20:31] * sudocat1 (~dibarra@192.185.1.20) has joined #ceph
[20:31] * davidz (~davidz@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[20:32] <mq> When I mount a cephfs in a client, I see all storage available of the ceph cluster, how I control the space of the mounted cephfs ?
[20:32] <mq> and where is saved this data ?
[20:33] <rkeene> It's saved in the two pools you specified when you created the filesystem, usually called "data" and "metadata"
[20:33] * derjohn_mob (~aj@46.189.28.56) Quit (Ping timeout: 480 seconds)
[20:35] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[20:37] <mq> Yes I created these pools, my quiery is if I have a client using mounted a rbd and a cephfs both using space of the cluster storage, how to I manage the storage in the cephfs mounted ?
[20:38] <mq> query *
[20:38] <ktdreyer> Kvisle: v10.2.3 is new, yeah. I'm not sure who writes that post nowdays. sage or abhishek ?
[20:39] * salwasser (~Adium@72.246.3.14) Quit (Quit: Leaving.)
[20:39] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[20:40] * linuxkidd (~linuxkidd@ip70-189-214-97.lv.lv.cox.net) has joined #ceph
[20:40] * shaunm (~shaunm@50-5-227-85.dynamic.fuse.net) Quit (Ping timeout: 480 seconds)
[20:40] <ktdreyer> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/012872.html leads me to think abhishek
[20:40] <ktdreyer> I don't know his IRC nick though
[20:42] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[20:47] * xinli (~charleyst@32.97.110.54) Quit (Ping timeout: 480 seconds)
[20:49] * haplo37 (~haplo37@199.91.185.156) Quit (Ping timeout: 480 seconds)
[20:54] * [arx] is now known as llua
[20:54] * tomaw_ is now known as tomaw
[20:56] * TMM (~hp@dhcp-077-248-009-229.chello.nl) has joined #ceph
[21:00] * Tenk (~Tonux@108.61.122.88) Quit ()
[21:09] * mykola (~Mikolaj@91.245.74.118) has joined #ceph
[21:10] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[21:11] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[21:14] * trociny (~Mikolaj@91.245.75.235) Quit (Ping timeout: 480 seconds)
[21:15] * rakeshgm (~rakesh@38.140.108.2) Quit (Ping timeout: 480 seconds)
[21:16] * vasu (~vasu@38.140.108.2) Quit (Ping timeout: 480 seconds)
[21:20] * vikhyat (~vumrao@103.3.43.75) Quit (Quit: Leaving)
[21:24] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) has joined #ceph
[21:27] * rakeshgm (~rakesh@38.140.108.5) has joined #ceph
[21:29] * vasu (~vasu@38.140.108.5) has joined #ceph
[21:32] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[21:36] <mq> how to mount the cephfs using subtree ?
[21:39] * rakeshgm (~rakesh@38.140.108.5) Quit (Quit: Peace :))
[21:40] * sese_ (~aleksag@93.115.95.206) has joined #ceph
[21:48] * treenerd_ (~gsulzberg@cpe90-146-148-47.liwest.at) Quit (Quit: treenerd_)
[22:00] * fdssd5sd5sd (~ds5a5a5@CableLink-187-160-109-223.PCs.InterCable.net) has joined #ceph
[22:00] * fdssd5sd5sd (~ds5a5a5@CableLink-187-160-109-223.PCs.InterCable.net) has left #ceph
[22:01] * rendar (~I@95.238.176.98) Quit (Ping timeout: 480 seconds)
[22:01] * BManojlovic (~steki@cable-94-189-166-198.dynamic.sbb.rs) has joined #ceph
[22:06] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[22:10] * sese_ (~aleksag@93.115.95.206) Quit ()
[22:16] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[22:26] * rendar (~I@95.238.176.98) has joined #ceph
[22:27] * vicente (~vicente@111-241-37-132.dynamic.hinet.net) Quit (Ping timeout: 480 seconds)
[22:27] * vicente (~vicente@1-161-184-59.dynamic.hinet.net) has joined #ceph
[22:29] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[22:30] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) has joined #ceph
[22:35] * Jeffrey4l_ (~Jeffrey@61.55.64.98) Quit (Read error: Connection reset by peer)
[22:36] * Jeffrey4l_ (~Jeffrey@61.55.64.98) has joined #ceph
[22:36] * sudocat1 (~dibarra@192.185.1.20) Quit (Quit: Leaving.)
[22:36] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[22:37] * wes_dillingham_ (~wes_dilli@65.112.8.203) has joined #ceph
[22:38] * wes_dillingham_ (~wes_dilli@65.112.8.203) Quit ()
[22:38] * mykola (~Mikolaj@91.245.74.118) Quit (Quit: away)
[22:39] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:b52e:d52c:63df:d5a8) Quit (Ping timeout: 480 seconds)
[22:40] * wes_dillingham (~wes_dilli@140.247.242.44) Quit (Ping timeout: 480 seconds)
[22:41] * xinli (~charleyst@32.97.110.57) has joined #ceph
[22:42] * Concubidated (~cube@68.140.239.164) Quit (Quit: Leaving.)
[22:47] * erwan_taf (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Ping timeout: 480 seconds)
[22:49] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Read error: Connection reset by peer)
[22:50] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[22:50] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit ()
[22:55] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[23:01] * haplo37 (~haplo37@199.91.185.156) Quit (Read error: Connection reset by peer)
[23:01] * hyperbaba (~hyperbaba@80.74.175.250) Quit (Remote host closed the connection)
[23:04] * bauruine (~bauruine@2a01:4f8:130:8285:fefe::36) Quit (Quit: ZNC - http://znc.in)
[23:05] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[23:06] * bauruine (~bauruine@2a01:4f8:130:8285:fefe::36) has joined #ceph
[23:07] * stupidnic (~foo@c-73-7-153-223.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[23:08] * linuxkidd (~linuxkidd@ip70-189-214-97.lv.lv.cox.net) Quit (Quit: Leaving)
[23:08] * linuxkidd (~linuxkidd@ip70-189-214-97.lv.lv.cox.net) has joined #ceph
[23:16] * mq (~oftc-webi@24.139.73.106) Quit (Remote host closed the connection)
[23:19] * Concubidated (~cube@68.140.239.164) has joined #ceph
[23:27] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[23:29] * ChuckMe (~ChuckMe@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: ChuckMe)
[23:33] * oliveiradan (~doliveira@137.65.133.10) Quit (Remote host closed the connection)
[23:36] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[23:48] * BManojlovic (~steki@cable-94-189-166-198.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:49] * rakeshgm (~rakesh@38.140.108.5) has joined #ceph
[23:49] * Dragonshadow (~TehZomB@5.153.234.90) has joined #ceph
[23:51] * mtanski (~mtanski@65.244.82.98) Quit (Quit: mtanski)
[23:52] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[23:53] * cathode (~cathode@50-232-215-114-static.hfc.comcastbusiness.net) has joined #ceph
[23:54] * vasu (~vasu@38.140.108.5) Quit (Ping timeout: 480 seconds)
[23:59] * oliveiradan (~doliveira@137.65.133.10) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.