#ceph IRC Log

IRC Log for 2016-08-08

Timestamps are in GMT/BST.

[0:12] * Freddy (~Kaervan@61TAAA5U7.tor-irc.dnsbl.oftc.net) Quit ()
[0:17] * [0x4A6F]_ (~ident@p4FC27EF0.dip0.t-ipconnect.de) has joined #ceph
[0:20] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:20] * [0x4A6F]_ is now known as [0x4A6F]
[0:21] * QuantumBeep (~cooey@torsrva.snydernet.net) has joined #ceph
[0:51] * QuantumBeep (~cooey@5AEAAAT2F.tor-irc.dnsbl.oftc.net) Quit ()
[0:56] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[0:59] * rendar (~I@host199-176-dynamic.7-87-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[1:08] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[1:10] * Nacer (~Nacer@176.31.89.99) Quit (Remote host closed the connection)
[1:14] <Anticimex> hmm. trying to set up a single host ceph cluster
[1:14] <Anticimex> failing on e.g http://tracker.ceph.com/issues/16379 / http://tracker.ceph.com/issues/16477
[1:14] <Anticimex> jewel on debian
[1:15] <Anticimex> wondering what's up there
[1:22] <Anticimex> seems to be stuck probing (followed http://docs.ceph.com/docs/master/install/manual-deployment/ )
[1:24] <Anticimex> ok, mon name got weird, fixed
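For reference, a minimal single-node ceph.conf along the lines of the manual-deployment doc Anticimex followed; the fsid, hostname, and address below are placeholders, and the name in "mon initial members" has to match the name the monitor was actually created with, which is the mismatch behind the "stuck probing" symptom above.

    cat > /etc/ceph/ceph.conf <<'EOF'
    [global]
    fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993    # placeholder; generate with uuidgen
    mon initial members = node1                    # placeholder; must match the monitor's name
    mon host = 192.168.0.10                        # placeholder address
    osd pool default size = 1                      # single host, no replication
    osd crush chooseleaf type = 0                  # let PGs map to OSDs on the same host
    EOF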
[1:38] * srk_ (~Siva@2605:6000:ed04:ce00:f5c3:b973:7f8b:a98d) has joined #ceph
[1:39] * Dragonshadow (~puvo@0x667.crypt.gy) has joined #ceph
[1:45] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[1:48] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[1:48] * oms101 (~oms101@p20030057EA5E0C00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:52] * srk_ (~Siva@2605:6000:ed04:ce00:f5c3:b973:7f8b:a98d) Quit (Ping timeout: 480 seconds)
[1:56] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[1:57] * oms101 (~oms101@p20030057EA02AD00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:00] * truan-wang (~truanwang@220.248.17.34) has joined #ceph
[2:09] * Dragonshadow (~puvo@26XAAAWH9.tor-irc.dnsbl.oftc.net) Quit ()
[2:09] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[2:10] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:12] * doppelgrau__ (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau__)
[2:33] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[2:50] * truan-wang (~truanwang@220.248.17.34) Quit (Remote host closed the connection)
[2:50] * AndroUser2 (~androirc@107.170.0.159) has joined #ceph
[2:53] * AndroUser2 (~androirc@107.170.0.159) Quit (Remote host closed the connection)
[2:53] * AndroUser2 (~androirc@107.170.0.159) has joined #ceph
[2:54] * inevity (~androirc@107.170.0.159) Quit (Ping timeout: 480 seconds)
[2:57] * AndroUser2 (~androirc@107.170.0.159) Quit (Remote host closed the connection)
[3:03] * Grimhound (~Esvandiar@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[3:07] * truan-wang (~truanwang@58.247.8.186) has joined #ceph
[3:30] * aj__ (~aj@x590cd87d.dyn.telefonica.de) has joined #ceph
[3:31] * Jeffrey4l_ (~Jeffrey@110.252.60.190) Quit (Ping timeout: 480 seconds)
[3:31] * dnunez (~dnunez@209-6-91-147.c3-0.smr-ubr1.sbo-smr.ma.cable.rcn.com) has joined #ceph
[3:33] * Grimhound (~Esvandiar@61TAAA5Y8.tor-irc.dnsbl.oftc.net) Quit ()
[3:36] * Jeffrey4l_ (~Jeffrey@110.252.60.190) has joined #ceph
[3:38] * derjohn_mobi (~aj@x4db0e19f.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[3:38] * dnunez (~dnunez@209-6-91-147.c3-0.smr-ubr1.sbo-smr.ma.cable.rcn.com) Quit (Quit: Leaving)
[3:40] * sebastian-w_ (~quassel@212.218.8.138) has joined #ceph
[3:41] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:44] * sebastian-w (~quassel@212.218.8.138) Quit (Ping timeout: 480 seconds)
[3:45] * AndroUser2 (~androirc@107.170.0.159) has joined #ceph
[3:45] * AndroUser2 (~androirc@107.170.0.159) Quit ()
[3:51] * yanzheng (~zhyan@125.70.20.176) has joined #ceph
[4:05] * Jeffrey4l_ (~Jeffrey@110.252.60.190) Quit (Ping timeout: 480 seconds)
[4:06] * Jeffrey4l_ (~Jeffrey@110.252.60.190) has joined #ceph
[4:09] * flisky (~Thunderbi@210.12.157.93) has joined #ceph
[4:12] * flisky (~Thunderbi@210.12.157.93) Quit ()
[4:18] * measter (~richardus@79-236-47-212.rev.cloud.scaleway.com) has joined #ceph
[4:20] * ronrib (~boswortr@45.32.242.135) Quit (Remote host closed the connection)
[4:20] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) Quit (Quit: Leaving)
[4:22] * _28_ria (~kvirc@opfr028.ru) Quit (Ping timeout: 480 seconds)
[4:22] * jfaj_ (~jan@p20030084AF2C12005EC5D4FFFEBB68A4.dip0.t-ipconnect.de) has joined #ceph
[4:23] * _28_ria (~kvirc@opfr028.ru) has joined #ceph
[4:26] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:27] * truan-wang (~truanwang@58.247.8.186) Quit (Ping timeout: 480 seconds)
[4:29] * jfaj (~jan@p4FE4F670.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[4:41] * truan-wang (~truanwang@220.248.17.34) has joined #ceph
[4:44] * haomaiwang (~oftc-webi@61.149.85.206) has joined #ceph
[4:47] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[4:48] * measter (~richardus@5AEAAAT5Y.tor-irc.dnsbl.oftc.net) Quit ()
[4:53] * jwandborg (~Diablodoc@65.19.167.130) has joined #ceph
[4:56] * kefu (~kefu@114.92.96.253) has joined #ceph
[4:57] * wgao (~wgao@106.120.101.38) has joined #ceph
[5:00] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[5:08] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[5:13] * truan-wang_ (~truanwang@58.247.8.186) has joined #ceph
[5:13] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[5:15] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[5:15] * truan-wang (~truanwang@220.248.17.34) Quit (Ping timeout: 480 seconds)
[5:16] * Racpatel (~Racpatel@2601:87:0:24af::53d5) Quit (Ping timeout: 480 seconds)
[5:23] * jwandborg (~Diablodoc@26XAAAWMG.tor-irc.dnsbl.oftc.net) Quit ()
[5:32] * Skyrider (~dug@tor-exit-node.seas.upenn.edu) has joined #ceph
[5:39] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[5:40] * vimal (~vikumar@114.143.165.8) has joined #ceph
[5:40] * Vacuum_ (~Vacuum@88.130.195.58) has joined #ceph
[5:45] * haomaiwang (~oftc-webi@61.149.85.206) Quit (Ping timeout: 480 seconds)
[5:47] * Vacuum__ (~Vacuum@i59F792AB.versanet.de) Quit (Ping timeout: 480 seconds)
[6:00] * truan-wang_ (~truanwang@58.247.8.186) Quit (Remote host closed the connection)
[6:00] * Raboo (~raboo@nl-ams-ubnt01.letit.se) Quit (Remote host closed the connection)
[6:01] * walcubi__ (~walcubi@p5795B41D.dip0.t-ipconnect.de) has joined #ceph
[6:02] * Skyrider (~dug@5AEAAAT7B.tor-irc.dnsbl.oftc.net) Quit ()
[6:04] * vimal (~vikumar@114.143.165.8) Quit (Quit: Leaving)
[6:06] * Plesioth (~redbeast1@tor-exit.gansta93.com) has joined #ceph
[6:08] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[6:08] * walcubi_ (~walcubi@p5795B5F3.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:10] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:14] * valeech (~valeech@pool-108-44-162-111.clppva.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:17] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[6:27] * vimal (~vikumar@121.244.87.116) has joined #ceph
[6:27] * truan-wang (~truanwang@220.248.17.34) has joined #ceph
[6:29] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[6:31] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[6:31] * valeech (~valeech@166.170.32.74) has joined #ceph
[6:34] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[6:36] * Plesioth (~redbeast1@61TAAA52R.tor-irc.dnsbl.oftc.net) Quit ()
[6:43] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[6:45] * Lite (~Jourei@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[7:11] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:14] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[7:15] * Lite (~Jourei@9YSAAA6ZR.tor-irc.dnsbl.oftc.net) Quit ()
[7:16] * aj__ (~aj@x590cd87d.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[7:22] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:24] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:24] * _28_ria (~kvirc@opfr028.ru) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[7:33] * SquallSeeD31 (~Behedwin@26XAAAWO9.tor-irc.dnsbl.oftc.net) has joined #ceph
[7:35] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[7:38] * valeech (~valeech@166.170.32.74) Quit (Read error: Connection reset by peer)
[7:43] * swami1 (~swami@49.38.3.197) has joined #ceph
[7:53] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[8:02] * art_yo|2 (~kvirc@149.126.169.197) has joined #ceph
[8:03] * SquallSeeD31 (~Behedwin@26XAAAWO9.tor-irc.dnsbl.oftc.net) Quit ()
[8:03] * Shnaw (~basicxman@9.tor.exit.babylon.network) has joined #ceph
[8:04] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[8:05] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[8:10] * efirs (~firs@31.173.240.149) has joined #ceph
[8:14] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[8:15] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Killed (NickServ (Too many failed password attempts.)))
[8:15] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[8:22] * aNupoisc (~adnavare@134.134.139.83) has joined #ceph
[8:23] * haomaiwang (~oftc-webi@61.149.85.206) has joined #ceph
[8:24] * art_yo|2 (~kvirc@149.126.169.197) Quit (Read error: Connection reset by peer)
[8:26] * topro_ (~prousa@p578af414.dip0.t-ipconnect.de) has joined #ceph
[8:33] * Shnaw (~basicxman@5AEAAAT9L.tor-irc.dnsbl.oftc.net) Quit ()
[8:46] * doppelgrau_ (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[8:50] * aj__ (~aj@88.128.80.198) has joined #ceph
[8:54] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) has joined #ceph
[8:57] * evelu (~erwan@46.231.131.178) has joined #ceph
[9:02] * doppelgrau_ (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau_)
[9:02] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[9:10] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[9:15] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[9:16] * efirs (~firs@31.173.240.149) Quit (Ping timeout: 480 seconds)
[9:16] * snelly (~cjs@sable.island.nu) Quit (Remote host closed the connection)
[9:19] * snelly (~cjs@sable.island.nu) has joined #ceph
[9:19] * snelly (~cjs@sable.island.nu) has left #ceph
[9:21] * rdas (~rdas@121.244.87.116) has joined #ceph
[9:22] * aNupoisc (~adnavare@134.134.139.83) Quit (Remote host closed the connection)
[9:23] * thomnico (~thomnico@2a01:e35:8b41:120:cd45:6716:e80f:89c7) has joined #ceph
[9:24] * analbeard (~shw@support.memset.com) has joined #ceph
[9:26] * Hemanth (~hkumar_@121.244.87.117) Quit (Quit: Leaving)
[9:29] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[9:31] * fsimonce (~simon@host203-44-dynamic.183-80-r.retail.telecomitalia.it) has joined #ceph
[9:34] * swami2 (~swami@223.227.11.65) has joined #ceph
[9:38] * swami1 (~swami@49.38.3.197) Quit (Ping timeout: 480 seconds)
[9:40] * boolman (boolman@79.138.78.238) has joined #ceph
[9:44] * aj__ (~aj@88.128.80.198) Quit (Ping timeout: 480 seconds)
[9:54] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:55] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[9:57] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[10:01] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[10:08] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[10:13] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) has joined #ceph
[10:15] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[10:16] * art_yo (~kvirc@149.126.169.197) has joined #ceph
[10:17] * aj__ (~aj@fw.gkh-setu.de) has joined #ceph
[10:18] * truan-wang (~truanwang@220.248.17.34) Quit (Ping timeout: 480 seconds)
[10:20] * ira (~ira@121.244.87.117) has joined #ceph
[10:20] * giorgis (~oftc-webi@ppp-94-64-12-25.home.otenet.gr) has joined #ceph
[10:21] <giorgis> hi cephers! I have an emergency issue and would like to get some help
[10:22] <giorgis> does anyone know if it's possible to recover data from an RBD provided by CEPH to OpenStack that was accidentally deleted???
[10:24] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[10:26] * t4nk643 (~oftc-webi@abts-tn-dynamic-026.205.174.122.airtelbroadband.in) has joined #ceph
[10:28] * rendar (~I@host211-181-dynamic.52-79-r.retail.telecomitalia.it) has joined #ceph
[10:30] * flesh (~oftc-webi@static.ip-171-033-130-093.signet.nl) has joined #ceph
[10:34] <t4nk643> hi
[10:34] * aj__ (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[10:35] * kefu (~kefu@114.92.96.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[10:36] * walcubi__ is now known as walcubi
[10:37] * kefu (~kefu@114.92.96.253) has joined #ceph
[10:37] <walcubi> With xfs, throughput has dropped down to 150 ops/s. Still higher than btrfs, but an order of magnitude slower than when it started.
[10:39] <walcubi> Also like btrfs, an OSD spuriously died when reaching around the 18-22 million objects mark.
[10:41] * TMM (~hp@185.5.121.201) has joined #ceph
[10:43] * aj__ (~aj@fw.gkh-setu.de) has joined #ceph
[10:45] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:7c62:b891:9b8a:4ede) has joined #ceph
[11:03] * kefu (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[11:04] * kefu (~kefu@114.92.96.253) has joined #ceph
[11:04] * rakeshgm (~rakesh@121.244.87.118) has joined #ceph
[11:09] * t4nk095 (~oftc-webi@abts-tn-dynamic-026.205.174.122.airtelbroadband.in) has joined #ceph
[11:13] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[11:15] * t4nk643 (~oftc-webi@abts-tn-dynamic-026.205.174.122.airtelbroadband.in) Quit (Ping timeout: 480 seconds)
[11:16] * Kioob (~Kioob@ALyon-658-1-170-70.w90-53.abo.wanadoo.fr) has joined #ceph
[11:20] * ade (~abradshaw@tmo-080-251.customers.d1-online.com) has joined #ceph
[11:20] <art_yo> Hi all! Could you tell me why does space disappear? ceph -s shows:
[11:20] <art_yo> pgmap v473402: 256 pgs, 1 pools, 3227 GB data, 818 kobjects
[11:20] <art_yo> 7354 GB used, 11718 GB / 20094 GB avail
[11:21] * t4nk095 (~oftc-webi@abts-tn-dynamic-026.205.174.122.airtelbroadband.in) Quit (Ping timeout: 480 seconds)
[11:21] <art_yo> But df -kh shows that there is a lot of free space:
[11:21] <art_yo> [root@hulk ~]# df -kh /mnt/ceph
[11:21] <art_yo> Filesystem Size Used Avail Use% Mounted on
[11:21] <art_yo> /dev/rbd0 17T 7.8G 17T 1% /mnt/ceph
[11:21] <art_yo> I have one pool only
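A detail worth checking for a mismatch like art_yo's: ceph -s counts raw usage across all replicas, and blocks freed inside a filesystem sitting on an RBD image are not released back to RADOS unless discard/fstrim runs, so the pool's "used" figure can stay far above what df reports. A sketch for comparing the two views; pool and image names are placeholders, and fstrim only helps if the kernel rbd mapping supports discard:

    ceph df detail               # per-pool usage as the cluster sees it
    rbd du rbd/myimage           # provisioned vs. allocated size of the image (slow without the fast-diff feature)
    fstrim -v /mnt/ceph          # hand freed blocks back to the cluster, if discard is supported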
[11:25] * Nacer (~Nacer@176.31.89.99) has joined #ceph
[11:33] * t4nk931 (~oftc-webi@abts-tn-dynamic-026.205.174.122.airtelbroadband.in) has joined #ceph
[11:35] * thomnico (~thomnico@2a01:e35:8b41:120:cd45:6716:e80f:89c7) Quit (Quit: Ex-Chat)
[11:37] * rmart04 (~rmart04@support.memset.com) has joined #ceph
[11:49] <walcubi> -4> 2016-08-08 09:45:50.076878 7f1915f9e800 0 filestore(/var/lib/ceph/osd/ceph-3) write couldn't open 0.86_head/#0:612a782e:::38f8d660872d8587_23:head#: (13) Permission denied
[11:49] <walcubi> -3> 2016-08-08 09:45:50.076885 7f1915f9e800 0 filestore(/var/lib/ceph/osd/ceph-3) error (13) Permission denied not handled on operation 0x56384869e86b (8733808.0.0, or op 0, counting from 0)
[11:51] <walcubi> https://github.com/ceph/ceph/blob/28575db3fb1579cdfa85b14b0484363cc0634a2e/src/os/filestore/FileStore.cc#L2912
[11:51] <walcubi> I assume this means the journal is kaput
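The journal is usually not the culprit for an EPERM like the one walcubi pasted; on Jewel a common cause is OSD data still owned by root after an upgrade, since the daemons now run as the ceph user. A sketch for the osd.3 path from the log, assuming systemd units:

    ls -ln /var/lib/ceph/osd/ceph-3 | head         # look for files still owned by root
    systemctl stop ceph-osd@3
    chown -R ceph:ceph /var/lib/ceph/osd/ceph-3    # plus the journal device/file if it lives elsewhere
    systemctl start ceph-osd@3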
[11:53] * t4nk931 (~oftc-webi@abts-tn-dynamic-026.205.174.122.airtelbroadband.in) Quit (Quit: Page closed)
[11:55] * aj__ (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[11:58] * kefu is now known as kefu|afk
[11:59] * rraja (~rraja@121.244.87.117) has joined #ceph
[12:00] <rmart04> Hi Guys, I've just upgraded to Jewel, looking to use the new swift multitenancy api functionality. Unfortunately, I'm getting lots of this type of error -> RGWZoneParams::create(): error creating default zone params: (17) File exists (I think this may be from a previous rgw installation that I've ripped out. I've taken a look at the mailing list, and I found some references, but not a lot in terms of how to fix it! I've torn it back down to sta
[12:00] <rmart04> again, and wondered if I need to do something to remove the "default zone params".? Does anyone have any ideas?
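For the "(17) File exists" that rmart04 describes, one way to inspect and clear zone/realm metadata left behind by an earlier rgw install is radosgw-admin; a sketch only, and destructive if the old setup still matters:

    radosgw-admin realm list
    radosgw-admin zonegroup list
    radosgw-admin zone list
    radosgw-admin zone delete --rgw-zone=default    # only if the leftover default zone is truly unwanted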
[12:03] * rakeshgm (~rakesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[12:05] * aj__ (~aj@fw.gkh-setu.de) has joined #ceph
[12:14] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[12:23] * guerby (~guerby@ip165.tetaneutral.net) Quit (Quit: Leaving)
[12:24] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:25] * rakeshgm (~rakesh@121.244.87.117) has joined #ceph
[12:27] * bniver (~bniver@pool-96-233-76-59.bstnma.fios.verizon.net) has joined #ceph
[12:29] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[12:30] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[12:31] * MrBy (~MrBy@85.115.23.2) Quit (Remote host closed the connection)
[12:33] <walcubi> https://s10.postimg.org/pupoly4ll/osdavgsize.png
[12:34] <walcubi> https://s10.postimg.org/novdrg155/clientops.png
[12:34] <walcubi> As the average size held on disk increases, the throughput drops like a stone.
[12:35] <walcubi> Just running 'ls' on the disk and the stat times are horrendous.
[12:35] * Kakeru (~Bj_o_rn@178.162.211.222) has joined #ceph
[12:36] <walcubi> This is an order of magnitude worse with btrfs
[12:37] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[12:38] <walcubi> Although btrfs interestingly has a much higher throughput to start off with.
[12:38] <walcubi> It just seems to choke when there are 20 million tiny files in the store
[12:40] <walcubi> XFS poor performance seems to be well documented.
[12:40] <walcubi> "The journal is usually never the performance problem except for small random write IO, as the log is a circular buffer that is access sequentially as the log just gets appended."
[12:40] * guerby (~guerby@ip165.tetaneutral.net) has joined #ceph
[12:41] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) Quit (Quit: ZNC - http://znc.in)
[12:41] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) has joined #ceph
[12:42] <walcubi> In other words, randomly writing 64 million 4Kb images was not meant for XFS, and they are denialists that there is even a problem. :-D
[12:43] <walcubi> Anyone here goes to Ceph Berlin Meet-ups?
[12:44] <walcubi> I may pick your brains.
[12:45] <walcubi> As I really don't think I should be seeing performance 10 orders of magnitude worse than using a local disk for storing objects.
[12:47] <walcubi> Hmm, maybe it's because internally it's moving more and more files into deeper and deeper levels of directories.
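The splitting walcubi suspects is governed by two filestore tunables; a PG subdirectory reportedly splits once it holds more than about filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects, and directories merge back below the merge threshold. A sketch that pushes splitting out at the cost of larger directories; the values are only illustrative and worth benchmarking first:

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [osd]
    filestore merge threshold = 40    # illustrative value; default is 10
    filestore split multiple = 8      # illustrative value; default is 2
    EOF
    # restart the OSDs afterwards so the new thresholds are picked up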
[12:50] * Nacer_ (~Nacer@176.31.89.99) has joined #ceph
[12:50] * Nacer (~Nacer@176.31.89.99) Quit (Read error: Connection reset by peer)
[12:53] <rmart04> anyone know if there are any up-to-date jewel docs for creating RGW services?
[12:53] <rmart04> feel like im just missing a step
[12:58] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[13:00] <walcubi> Maybe time to try out ext4, if I'm only using rados_aio_write() to send data to ceph, it won't hit some xattr limit, will it?
[13:01] <walcubi> I can only see the following set: user.ceph._ user.ceph._@1 user.ceph.snapset user.cephos.spill_out
[13:01] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) has joined #ceph
[13:03] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Quit: Leaving)
[13:05] * Kakeru (~Bj_o_rn@61TAAA58W.tor-irc.dnsbl.oftc.net) Quit ()
[13:05] * Nacer_ (~Nacer@176.31.89.99) Quit (Read error: Connection reset by peer)
[13:06] * Nacer (~Nacer@176.31.89.99) has joined #ceph
[13:08] * Nacer_ (~Nacer@pai34-5-88-176-168-157.fbx.proxad.net) has joined #ceph
[13:13] * rdas (~rdas@121.244.87.116) has joined #ceph
[13:15] * Nacer (~Nacer@176.31.89.99) Quit (Ping timeout: 480 seconds)
[13:20] * aj__ (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[13:23] * Catsceo (~Esvandiar@tor.yrk.urgs.uk0.bigv.io) has joined #ceph
[13:29] * aj__ (~aj@fw.gkh-setu.de) has joined #ceph
[13:32] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[13:42] * haomaiwang (~oftc-webi@61.149.85.206) Quit (Ping timeout: 480 seconds)
[13:44] * kefu|afk (~kefu@114.92.96.253) Quit (Max SendQ exceeded)
[13:45] * kefu (~kefu@114.92.96.253) has joined #ceph
[13:45] * Racpatel (~Racpatel@2601:87:0:24af::53d5) has joined #ceph
[13:48] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[13:48] * karnan (~karnan@2405:204:5502:b48e:3602:86ff:fe56:55ae) has joined #ceph
[13:50] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) has joined #ceph
[13:50] * ChanServ sets mode +o nhm
[13:53] * Catsceo (~Esvandiar@26XAAAWUI.tor-irc.dnsbl.oftc.net) Quit ()
[13:56] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[13:57] * georgem (~Adium@24.114.48.82) has joined #ceph
[13:58] * georgem (~Adium@24.114.48.82) Quit ()
[13:58] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:06] * Kurimus (~Drezil@0x667.crypt.gy) has joined #ceph
[14:07] * ira (~ira@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:08] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[14:11] * kefu (~kefu@114.92.96.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[14:14] * neurodrone_ (~neurodron@pool-100-35-225-168.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[14:20] * sickolog1 (~mio@vpn.bcs.hr) has joined #ceph
[14:21] * sickology (~mio@vpn.bcs.hr) Quit (Read error: Connection reset by peer)
[14:25] * sickology (~mio@vpn.bcs.hr) has joined #ceph
[14:25] * sickolog1 (~mio@vpn.bcs.hr) Quit (Read error: Connection reset by peer)
[14:27] * sickology (~mio@vpn.bcs.hr) Quit (Read error: Connection reset by peer)
[14:30] * sickology (~mio@vpn.bcs.hr) has joined #ceph
[14:32] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[14:34] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[14:36] * Kurimus (~Drezil@61TAAA6AF.tor-irc.dnsbl.oftc.net) Quit ()
[14:37] * bara (~bara@213.175.37.12) has joined #ceph
[14:38] * kefu (~kefu@114.92.96.253) has joined #ceph
[14:40] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[14:41] * karnan (~karnan@2405:204:5502:b48e:3602:86ff:fe56:55ae) Quit (Quit: Leaving)
[14:42] * giorgis (~oftc-webi@ppp-94-64-12-25.home.otenet.gr) Quit (Quit: Page closed)
[14:44] <rmart04> Does anyone know why 10.2.2 Keystone auth is asking for a revocation list? Can't it just query Keystone to see if the token is valid?
[14:44] <rmart04> (rgw)
[14:45] * AndroUser2 (~androirc@107.170.0.159) has joined #ceph
[14:49] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:50] * racpatel__ (~Racpatel@2601:87:0:24af::53d5) has joined #ceph
[14:50] * racpatel__ (~Racpatel@2601:87:0:24af::53d5) Quit ()
[14:50] * thomnico (~thomnico@2a01:e35:8b41:120:cd45:6716:e80f:89c7) has joined #ceph
[14:57] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[15:00] * AndroUser2 (~androirc@107.170.0.159) Quit (Remote host closed the connection)
[15:02] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[15:02] * vimal (~vikumar@121.244.87.116) Quit (Quit: Leaving)
[15:04] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[15:08] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[15:09] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[15:11] <rmart04> Looks like this might not be finished yet, does anyone know when it might be available?
[15:18] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[15:19] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Quit: Ex-Chat)
[15:19] * pdrakewe_ (~pdrakeweb@oh-76-5-108-60.dhcp.embarqhsd.net) has joined #ceph
[15:20] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[15:20] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) Quit (Quit: Leaving)
[15:20] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[15:21] * zapu (~Deiz@tor-exit.squirrel.theremailer.net) has joined #ceph
[15:21] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:30] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) has joined #ceph
[15:31] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[15:34] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:38] * boolman (boolman@79.138.78.238) Quit (Ping timeout: 480 seconds)
[15:39] * salwasser (~Adium@2601:197:101:5cc1:2124:50cc:d25d:16c0) has joined #ceph
[15:47] * mattbenjamin (~mbenjamin@12.118.3.106) has joined #ceph
[15:48] * thomnico (~thomnico@2a01:e35:8b41:120:cd45:6716:e80f:89c7) Quit (Quit: Ex-Chat)
[15:48] * rakeshgm (~rakesh@121.244.87.117) Quit (Quit: Leaving)
[15:51] * zapu (~Deiz@9YSAAA69P.tor-irc.dnsbl.oftc.net) Quit ()
[15:51] * tunaaja (~xul@108.61.122.139) has joined #ceph
[15:56] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) has joined #ceph
[15:56] * yanzheng (~zhyan@125.70.20.176) Quit (Quit: This computer has gone to sleep)
[15:57] * haomaiwang (~oftc-webi@114.242.248.222) has joined #ceph
[15:58] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) has joined #ceph
[15:58] * kefu (~kefu@114.92.96.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[16:00] * scuttle|afk is now known as scuttlemonkey
[16:03] * bene3 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[16:05] * haomaiwang (~oftc-webi@114.242.248.222) Quit (Ping timeout: 480 seconds)
[16:05] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:06] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:07] * EinstCrazy (~EinstCraz@61.165.253.184) has joined #ceph
[16:09] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[16:09] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[16:10] * Mosibi (~Mosibi@dld.unixguru.nl) Quit (Quit: Lost terminal)
[16:10] * Mosibi (~Mosibi@dld.unixguru.nl) has joined #ceph
[16:15] * haomaiwang (~oftc-webi@114.242.248.152) has joined #ceph
[16:17] * scuttlemonkey is now known as scuttle|afk
[16:19] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[16:20] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[16:21] * tunaaja (~xul@61TAAA6D1.tor-irc.dnsbl.oftc.net) Quit ()
[16:23] * AndroUser2 (~androirc@107.170.0.159) has joined #ceph
[16:24] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[16:24] * kefu (~kefu@114.92.96.253) has joined #ceph
[16:26] * scuttle|afk is now known as scuttlemonkey
[16:26] * EinstCrazy (~EinstCraz@61.165.253.184) Quit (Remote host closed the connection)
[16:27] * ira (~ira@1.186.34.66) has joined #ceph
[16:27] * AndroUser2 (~androirc@107.170.0.159) Quit (Remote host closed the connection)
[16:27] * EinstCrazy (~EinstCraz@61.165.253.184) has joined #ceph
[16:28] * ade (~abradshaw@tmo-080-251.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[16:28] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:29] * jfaj_ (~jan@p20030084AF2C12005EC5D4FFFEBB68A4.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[16:29] * jfaj_ (~jan@p4FE4F5FB.dip0.t-ipconnect.de) has joined #ceph
[16:34] * pdrakewe_ (~pdrakeweb@oh-76-5-108-60.dhcp.embarqhsd.net) Quit (Read error: Connection reset by peer)
[16:38] * andreww (~xarses@64.124.158.192) has joined #ceph
[16:39] * Enikma (~Kyso_@tor2r.ins.tor.net.eu.org) has joined #ceph
[16:40] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) has joined #ceph
[16:40] * ira (~ira@1.186.34.66) Quit (Quit: Leaving)
[16:42] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[16:43] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit ()
[16:44] * evelu (~erwan@46.231.131.178) Quit (Remote host closed the connection)
[16:46] * joshd1 (~jdurgin@2602:30a:c089:2b0:2cc3:6b9:a376:9349) has joined #ceph
[16:48] * AndroUser2 (~androirc@107.170.0.159) has joined #ceph
[16:48] * AndroUser2 (~androirc@107.170.0.159) Quit ()
[16:49] <s3an2> walcubi, in the future release bluestore may help you.
[16:49] * AndroUser2 (~androirc@107.170.0.159) has joined #ceph
[16:49] * AndroUser2 (~androirc@107.170.0.159) Quit ()
[16:49] * vbellur (~vijay@2601:18f:700:55b0:5e51:4fff:fee8:6a5c) Quit (Ping timeout: 480 seconds)
[16:51] * b0e (~aledermue@213.95.25.82) has joined #ceph
[16:52] * dnunez (~dnunez@209-6-91-147.c3-0.smr-ubr1.sbo-smr.ma.cable.rcn.com) has joined #ceph
[16:53] * rendar (~I@host211-181-dynamic.52-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[16:55] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) has joined #ceph
[16:56] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:57] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) Quit ()
[16:57] * wushudoin (~wushudoin@2601:646:8281:cfd:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:58] * salwasser (~Adium@2601:197:101:5cc1:2124:50cc:d25d:16c0) Quit (Quit: Leaving.)
[17:01] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[17:01] * Mosibi (~Mosibi@dld.unixguru.nl) Quit (Quit: leaving)
[17:01] * srk (~Siva@32.97.110.55) has joined #ceph
[17:01] * gregmark (~Adium@68.87.42.115) has joined #ceph
[17:03] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[17:03] <haomaiwang> kefu: could you confirm the change today?
[17:04] <kefu> haomaiwang, no.
[17:05] * Mosibi (~Mosibi@dld.unixguru.nl) has joined #ceph
[17:05] <kefu> haomaiwang, sorry, you might want to have another reviewer to avoid SPOF. and i suggested you to find another reviewer last time.
[17:06] <haomaiwang> kefu: ..ok....but I think this pr....I need your help...
[17:06] <haomaiwang> I could expect another review in the next pr
[17:07] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:07] <kefu> haomaiwang, i am not 100% sure that ppl would be happy with the hook.
[17:07] <kefu> haomaiwang even i am fine with it.
[17:07] <haomaiwang> kefu: Who need to be involved?
[17:07] <kefu> haomaiwang probably you could ping sage or sjusthm
[17:07] * swami2 (~swami@223.227.11.65) Quit (Ping timeout: 480 seconds)
[17:07] <haomaiwang> hmm, josh own some global codes
[17:08] <kefu> haomaiwang but sam is afk today and probably tomorrow.
[17:08] <haomaiwang> I think he may help...
[17:08] <kefu> good.
[17:08] <haomaiwang> joshd: ping
[17:09] * Enikma (~Kyso_@26XAAAW0G.tor-irc.dnsbl.oftc.net) Quit ()
[17:09] * n0x1d (~Dinnerbon@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[17:10] * aNupoisc (~adnavare@192.55.54.38) has joined #ceph
[17:10] * theTrav (~theTrav@CPE-124-188-218-238.sfcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[17:11] * blizzow (~jburns@50.243.148.102) has joined #ceph
[17:12] * penguinRaider_ (~KiKo@182.18.155.15) Quit (Ping timeout: 480 seconds)
[17:13] * vbellur (~vijay@nat-pool-bos-t.redhat.com) has joined #ceph
[17:16] * kefu is now known as kefu|afk
[17:16] <theanalyst> rmart04, the way keystone auth works is by default you wouldn't query keystone for every request to validate, you have the keystone's public keys and validate the token is "signed" by keystone
[17:16] <theanalyst> rmart04, keystone acts like a CA of sorts
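For reference, the Jewel-era rgw keystone settings the two of them are discussing look roughly like this; section name, URL, and credentials are placeholders, and whether the revocation list matters at all depends on whether Keystone hands out PKI or UUID/Fernet tokens:

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [client.rgw.gateway]                                      # placeholder instance name
    rgw keystone url = http://keystone.example.com:35357      # placeholder endpoint
    rgw keystone admin user = ceph                            # placeholder service credentials
    rgw keystone admin password = secret
    rgw keystone admin tenant = service
    rgw keystone accepted roles = admin,_member_
    rgw keystone token cache size = 500
    rgw keystone revocation interval = 900                    # seconds between revocation-list pulls (PKI tokens)
    EOF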
[17:17] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[17:18] * truan-wang (~truanwang@114.111.166.3) has joined #ceph
[17:19] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:23] * tsg__ (~tgohad@192.55.54.45) has joined #ceph
[17:31] * penguinRaider_ (~KiKo@23.27.206.118) has joined #ceph
[17:31] * sudocat (~dibarra@104-188-116-197.lightspeed.hstntx.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[17:32] * truan-wang (~truanwang@114.111.166.3) Quit (Ping timeout: 480 seconds)
[17:33] * rmart04 (~rmart04@support.memset.com) Quit (Ping timeout: 480 seconds)
[17:33] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:34] * EinstCrazy (~EinstCraz@61.165.253.184) Quit (Read error: Connection reset by peer)
[17:35] <walcubi> s3an2, Yeah...
[17:35] <walcubi> I think some of the weirdness I'm seeing is a limitation of the FS.
[17:35] * cathode (~cathode@50.232.215.114) has joined #ceph
[17:35] * EinstCrazy (~EinstCraz@61.165.253.184) has joined #ceph
[17:36] <walcubi> So when going from an empty pool to 100GB in 2 hours (roughly 1.3 million files)
[17:37] <walcubi> Throughput drops from 1000-1500ops to 250ops.
[17:38] <walcubi> Then it speeds up again some time later.
[17:38] * danieagle (~Daniel@179.110.8.48) has joined #ceph
[17:39] * n0x1d (~Dinnerbon@26XAAAW1G.tor-irc.dnsbl.oftc.net) Quit ()
[17:39] <walcubi> I think the reason why I am always seeing this regardless of whether I choose btrfs, xfs or ext4 is because it is rebalancing all files locally.
[17:40] <joshd1> haomaiwang: pong
[17:40] <walcubi> Going from one level osd/current/pg.xxx/DIR_0 to more and more deeper nested structure.
[17:41] <walcubi> Once it stops doing this, it speeds up. But when time comes to lower all files again, throughput drops.
[17:41] <walcubi> And this takes longer each time it needs to do so.
[17:42] * pdrakewe_ (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) has joined #ceph
[17:42] <walcubi> That's my best guess anyway.
[17:42] <haomaiwang> joshd1: there is a commit I'd like you to verify. It's in https://github.com/ceph/ceph/pull/10264/commits/34263419d2df486fdd6e367c11d7ce149397befc in PR( https://github.com/ceph/ceph/pull/10264). You can look at kefu's last comment for the background. Kefu thinks it's ok, but he wants more reviewers, especially someone who owns the global code, to verify.
[17:42] <haomaiwang> I think it's just a generic wrapper for global_init_prefork/global_init_postfork
[17:43] <haomaiwang> do you think so?
[17:43] * pdrakeweb (~pdrakeweb@cpe-71-74-153-111.neo.res.rr.com) Quit (Read error: Connection reset by peer)
[17:44] * Nacer_ (~Nacer@pai34-5-88-176-168-157.fbx.proxad.net) Quit (Remote host closed the connection)
[17:47] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[17:47] * bniver (~bniver@pool-96-233-76-59.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[17:49] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[17:50] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[17:57] <s3an2> walcubi, Can you maybe set up a test with bluestore to see if the issue is only with xfs? It would be interesting to see how well the new backend stands up to the job.
[18:00] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[18:00] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[18:02] * SinZ|offline (~Maza@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[18:02] * dnunez (~dnunez@209-6-91-147.c3-0.smr-ubr1.sbo-smr.ma.cable.rcn.com) Quit (Quit: Leaving)
[18:04] <joshd1> haomaiwang: seems like a reasonably generic thing to me
[18:04] <haomaiwang> joshd1: thanks, could you comment there? let kefu confirm this pr
[18:05] * rraja (~rraja@121.244.87.117) Quit (Quit: Leaving)
[18:09] * aNupoisc (~adnavare@192.55.54.38) Quit (Remote host closed the connection)
[18:09] * EinstCrazy (~EinstCraz@61.165.253.184) Quit (Remote host closed the connection)
[18:10] <walcubi> s3an2, You mean only filestore?
[18:10] <walcubi> I can reliably reproduce on all filesystems. :-)
[18:11] <walcubi> Yeah, that will be my next step. I have just had 15 new servers delivered with SSDs, so I'm going to give that a spin just now.
[18:12] <walcubi> In the original setup, I only had normal 2TB disks. Having millions of tiny files probably didn't help stat() times.
[18:12] <s3an2> walcubi, yea s/xfs/filestore
[18:12] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[18:13] <walcubi> s3an2, ceph-deploy install --testing ?
[18:14] <s3an2> Every FileSystem I have used seems to have problems with millions of tiny files, I Hope BlueStore is different ;)
[18:15] <s3an2> I think with Jewel you just need to set
[18:15] <s3an2> [global]
[18:15] <s3an2> enable experimental unrecoverable data corrupting features = *
[18:15] <s3an2> osd objectstore = bluestore
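A sketch of trying that end to end on Jewel, assuming the config above and that the local ceph-disk build already understands --bluestore; the device named is a placeholder and gets wiped:

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [global]
    enable experimental unrecoverable data corrupting features = *
    osd objectstore = bluestore
    EOF
    ceph-disk prepare --bluestore /dev/sdb    # placeholder device; destroys its contents
    ceph-disk activate /dev/sdb1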
[18:21] <nhm> bluestore is changing rapidly
[18:23] <nhm> specifically, the work being done on encode/decode right now is going to have a big effect since it's changing the size of metadata being stored in rocksdb.
[18:23] <nhm> that's one of the big things that makes millions of tiny files hard.
[18:24] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[18:25] * Kioob (~Kioob@ALyon-658-1-170-70.w90-53.abo.wanadoo.fr) Quit (Quit: Leaving.)
[18:25] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[18:27] * rendar (~I@host121-176-dynamic.52-79-r.retail.telecomitalia.it) has joined #ceph
[18:32] * SinZ|offline (~Maza@61TAAA6HW.tor-irc.dnsbl.oftc.net) Quit ()
[18:34] * rmart04 (~rmart04@host109-155-213-112.range109-155.btcentralplus.com) has joined #ceph
[18:38] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:42] <walcubi> nhm, is it reasonable to assume that this kind of environment might work with ceph?
[18:43] <nhm> walcubi: lots of little files?
[18:43] <walcubi> Billions of tiny 4KB-128KB images, no locality of reference (cache is useless).
[18:43] <nhm> Are you talking about cephfs?
[18:43] <walcubi> Nah, cephfs was a *massive* fail
[18:43] <rmart04> Hi 'theanalyst', thanks for that. Hence the additional revoke list. It looked as though it's been updated to use Auth/Tenant/Password auth that can validate like a normal OpenStack service (in the master docs). I'll keep playing, thanks for your help!
[18:44] <walcubi> nhm, We're currently trialling out using librados directly.
[18:44] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[18:44] <walcubi> Using AIO, naturally.
[18:44] <nhm> walcubi: I think John/Greg are still tracking down a lot of MDS bottlenecks
[18:44] <walcubi> The only operations we care about are read(), write() and stat()
[18:44] <walcubi> Especially stat()
[18:45] <nhm> walcubi: ok, so lots of little objects. Split/merge in filestore is going to be painful. Bluestore potentially will be better, but like I said, we're doing a lot of work right now on encode/decode and there's been some indications we've got work to do in the bitmap allocator.
[18:45] <s3an2> nhm, Yes multiple active MDS may help cephfs in the case I think
[18:45] <nhm> walcubi: ie I think bluestore is a better base to build on, but we've still got work to do.
[18:46] <walcubi> nhm, Yeah, I've looked up the slides. It does look like a step in the right direction.
[18:47] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[18:48] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[18:48] <walcubi> Currently the goal is to get as close to local SSD speed as possible. I think if I start using SSDs for OSDs, then that will help the scaling problem.
[18:48] <theanalyst> rmart04, np
[18:50] <nhm> walcubi: it remains to be seen how well rocksdb will handle the metadata load. We know there are potentially going to be some write/read amp issues. Sandisk is implementing an interface for their zetascale k/v store which may be a good alternative.
[18:50] <nhm> walcubi: potentially if we could store metadata on nvdimms or something like 3D xpoint that could be a very interesting solution for keeping SSD backed OSDs fast.
[18:50] <nhm> walcubi: but that's all down the road sort of stuff.
[18:51] * chopmann (~sirmonkey@2a02:8108:46c0:4315:df5d:ba6a:6149:9b4d) has joined #ceph
[18:52] <walcubi> nhm, For a small setup - around 150GB / 5 million objects - I was getting stat() times faster than SSDs. However I reckon that was testing how fast ceph's osd cache is. ;-)
[18:54] <nhm> yeah, cache likes to interfere. :)
[18:56] * kefu|afk (~kefu@114.92.96.253) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:00] * jpierre03 (~jpierre03@voyage.prunetwork.fr) Quit (Ping timeout: 480 seconds)
[19:00] * rmart04 (~rmart04@host109-155-213-112.range109-155.btcentralplus.com) Quit (Quit: rmart04)
[19:01] * joshd1 (~jdurgin@2602:30a:c089:2b0:2cc3:6b9:a376:9349) Quit (Quit: Leaving.)
[19:10] * haomaiwang (~oftc-webi@114.242.248.152) Quit (Ping timeout: 480 seconds)
[19:14] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[19:15] <blizzow> If I run an rbd bench-write command, the bench output is showing ~6000-10000 ops per second. When I log into a mon and do ceph -w while the bench is running, it's showing consistently 200-600 op/s. Is there some difference in the way they're counting?
[19:15] <blizzow> Or what they're measuring?
[19:15] * aj__ (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[19:19] * flesh (~oftc-webi@static.ip-171-033-130-093.signet.nl) Quit (Quit: Page closed)
[19:21] * salwasser (~Adium@2601:197:101:5cc1:78ee:d3d0:1424:54f4) has joined #ceph
[19:24] * aj__ (~aj@fw.gkh-setu.de) has joined #ceph
[19:36] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[19:36] * salwasser (~Adium@2601:197:101:5cc1:78ee:d3d0:1424:54f4) Quit (Quit: Leaving.)
[19:36] * dnunez (~dnunez@ceas-nat.EECS.Tufts.EDU) has joined #ceph
[19:36] <SamYaple> blizzow: if writeback caching is configured in ceph.conf, it might be displaying cached results. test it longer and larger to invalidate the cache or turn off the caching
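To take the librbd writeback cache out of the comparison along the lines SamYaple suggests, a sketch with a throwaway image; image name and size are placeholders, and the cache switch goes in the test client's ceph.conf for the duration of the run:

    # on the test client, temporarily set under [client]:  rbd cache = false
    rbd create bench-img --size 20480                                          # placeholder 20 GB image in the default pool
    rbd bench-write bench-img --io-size 4096 --io-threads 16 --io-pattern rand
    # in another terminal, compare against what the cluster itself reports:
    ceph -w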
[19:40] * tsg_ (~tgohad@192.55.54.40) has joined #ceph
[19:43] * salwasser (~Adium@c-76-118-229-231.hsd1.ma.comcast.net) has joined #ceph
[19:43] * salwasser (~Adium@c-76-118-229-231.hsd1.ma.comcast.net) Quit ()
[19:44] * danielsj (~hgjhgjh@df.85.7a9f.ip4.static.sl-reverse.com) has joined #ceph
[19:44] * dnunez (~dnunez@ceas-nat.EECS.Tufts.EDU) Quit (Ping timeout: 480 seconds)
[19:45] * tsg__ (~tgohad@192.55.54.45) Quit (Remote host closed the connection)
[19:45] * mykola (~Mikolaj@91.245.79.221) has joined #ceph
[19:49] <walcubi> nhm, I find it amusing that so far using ext4 filestore is outperforming both btrfs and xfs - at least for the moment... :-)
[19:49] <nhm> walcubi: it's not impossible
[19:49] <nhm> walcubi: there were some tests back in the day when we regularly tested them against each other where ext4 won.
[19:50] <walcubi> Just a shame that it's been deprecated because of lack of xattr support.
[19:51] <nhm> walcubi: I don't remember why, but even before that Sam had been concerned about some of the ext4 internals.
[19:51] <SamYaple> walcubi: long term, all the results show xfs having the most _consistent_ performance. as btrfs fills up it gets slower, and ext4 has some aging issues as well
[19:51] <SamYaple> not to mention the xattr limitations
[19:52] <SamYaple> i do plan on using ext4 with bluestore though
[19:52] <nhm> SamYaple: why?
[19:52] <SamYaple> nhm: i prefer ext4 over xfs
[19:53] <SamYaple> its unlikely to affect perforamnce of bluestore itself
[19:53] <nhm> SamYaple: why not use the built in block allocator?
[19:53] <SamYaple> nhm: no no, i mean the key partition part
[19:53] <SamYaple> to store the few files that need ot exist
[19:53] <walcubi> SamYaple, considering that I'm using librados directly. Could the internal xattrs be a problem using ext4?
[19:53] <nhm> SamYaple: oh, that basically won't matter, but sure.
[19:53] <SamYaple> nhm: thats what i was saying
[19:54] <SamYaple> walcubi: i believe the xattrs is a osd storage concern, you still use osds even using librados directly
[19:54] <walcubi> I'm aware that cephfs and rgw need more space for metadata, as an example.
[19:54] <SamYaple> its something to do with assuring they all get written out correctly since the kernel has no atomic commit
[19:54] <walcubi> Ah
[19:55] * keeperandy (~textual@50.245.231.209) has joined #ceph
[19:56] * davidzlap (~Adium@2605:e000:1313:8003:cdd9:3191:5b30:49ef) has joined #ceph
[19:56] * dnunez (~dnunez@130.64.25.58) has joined #ceph
[20:00] <walcubi> SamYaple, out of curiosity, is it possible to predefine how deep the nested levels are in the ceph filestore?
[20:00] <SamYaple> walcubi: actually i think there is. let me look that up. i remember something about that from a few years back
[20:01] <walcubi> What I mean is, if split/merge is such a heavy operation. And I *know* how much data I'm going to be putting into ceph beforehand
[20:02] <SamYaple> walcubi: nvm im thinking of squid
[20:02] <SamYaple> i dont believe there is a way to do what youre asking in ceph walcubi, but im not positive so keep digging
[20:03] * chopmann (~sirmonkey@2a02:8108:46c0:4315:df5d:ba6a:6149:9b4d) Quit (Quit: chopmann)
[20:16] * danielsj (~hgjhgjh@df.85.7a9f.ip4.static.sl-reverse.com) Quit (Ping timeout: 480 seconds)
[20:20] * keeperandy (~textual@50.245.231.209) Quit (Quit: Textual IRC Client: www.textualapp.com)
[20:34] * valeech_ (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) has joined #ceph
[20:34] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[20:35] * tsg_ (~tgohad@192.55.54.40) Quit (Remote host closed the connection)
[20:36] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:7c62:b891:9b8a:4ede) Quit (Ping timeout: 480 seconds)
[20:37] * valeech (~valeech@pool-96-247-203-33.clppva.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[20:37] * valeech_ is now known as valeech
[20:38] * tsg (~tgohad@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[20:38] <walcubi> HashIndex::init_split_folder() looks promising...
[20:40] * aj__ (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[20:43] <walcubi> SamYaple, https://github.com/ceph/ceph/blob/3ab63045df1c8b443f4e76a7d88d6d31735f45b7/src/os/filestore/CollectionIndex.h#L181-L191
[20:46] * Discovery (~Discovery@109.235.52.4) has joined #ceph
[20:46] <walcubi> Looks like if I can make ceph call this method with expected_num_objs = (expected files / num pgs), it will go ahead and create all nested levels for me.
[20:49] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[20:50] * aj__ (~aj@fw.gkh-setu.de) has joined #ceph
[20:56] <SamYaple> walcubi: cool! thansk for the info
[20:57] <walcubi> Looks like the caller comes from here: https://github.com/ceph/ceph/blob/aadc9ae13978294cebf970345a73e5584f34b923/src/osd/PG.cc#L2826
[20:58] <walcubi> And pool->expected_num_objects is set on pool creation!
[20:58] <walcubi> http://docs.ceph.com/docs/jewel/rados/operations/pools/#create-a-pool
[20:58] <walcubi> Voila
[20:59] <walcubi> I could have avoided stepping through the code after all that. :-P
[20:59] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[21:04] <walcubi> Feature request, positional command-line arguments
[21:05] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[21:05] <walcubi> So I can just type ceph osd pool create test expected-num-objects=xxx
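Spelled out with the Jewel positional syntax from the pool-creation doc linked above; pool name, PG counts, ruleset, and the object estimate are placeholders, and the docs pair expected-num-objects with a negative filestore merge threshold so the pre-created directories never get merged back:

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [osd]
    filestore merge threshold = -10    # negative value disables merging, keeping the pre-split tree in place
    EOF
    # pool-name  pg-num  pgp-num  replicated  crush-ruleset-name  expected-num-objects
    ceph osd pool create images 2048 2048 replicated replicated_ruleset 64000000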
[21:20] * garphy`aw is now known as garphy
[21:23] * aj__ (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[21:34] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[21:35] * Unai (~Adium@50-115-70-150.static-ip.telepacific.net) has joined #ceph
[21:35] * Pulp (~Pulp@63-221-50-195.dyn.estpak.ee) has joined #ceph
[21:36] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:38] <Unai> Hello guys… Quick question. One of the monitor nodes in a 3 monitor cluster just died. I don't think it's gonna recover. Is it better to remove the deceased monitor asap so the other monitors don't try to access it unsuccessfully (and of course add another one as soon as possible) or should I leave it until we add another monitor in place and then remove it?
[21:39] * Miouge_ (~Miouge@109.128.94.173) has joined #ceph
[21:42] * Miouge (~Miouge@109.128.94.173) Quit (Ping timeout: 480 seconds)
[21:42] * Miouge_ is now known as Miouge
[21:46] <SamYaple> Unai: personally, I would remove the bad monitor (unless the data can be recovered) and kill the keys
[21:50] <Unai> Thanks… That's my mindset but I wanted to make sure
[21:50] <The1_> .. and then add a new 3rd MON asap
[21:51] <Unai> Thanks SamYaple
[21:51] <Unai> That's the plan!
[21:53] <SamYaple> Unai: yea just to be clear, with only two monitors up, if one goes down the cluster grinds to a halt
[21:53] <SamYaple> Unai: thats the state you are in right now, just fyi
[21:53] <Unai> yeah… I appreciate that. Thanks for that :)
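A sketch of the remove-then-replace sequence Unai is planning, per the usual add/remove-monitors procedure; the monitor ID and hostname below are placeholders, and ceph-deploy is only one way to add the replacement:

    ceph mon remove mon-c            # placeholder ID of the dead monitor; drops it from the monmap
    ceph -s                          # quorum should now be 2 of 2
    ceph-deploy mon add mon-new      # placeholder hostname of the replacement, if ceph-deploy is in use
    ceph -s                          # confirm quorum is back to 3 monitors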
[21:58] * ircolle (~Adium@2601:285:201:633a:4df6:620d:b2a0:5bed) has joined #ceph
[22:02] * leandrojpg (~IceChat9@189-12-20-140.user.veloxzone.com.br) has joined #ceph
[22:02] <leandrojpg> hi
[22:03] <leandrojpg> even after adding the hammer repo, when I run ceph-deploy install it insists on getting jewel; can someone tell me why this happens on centos 6.5
[22:05] <leandrojpg> someone
[22:05] <leandrojpg> someone on line?
[22:07] * reed (~reed@216.38.134.18) has joined #ceph
[22:07] <leandrojpg> even after adding the hammer repo, when I run ceph-deploy install it insists on getting jewel; can someone tell me why this happens on centos 6.5
[22:07] <leandrojpg> [17:05.13] <leandrojpg> someone
[22:08] <leandrojpg> reed for help
[22:08] <reed> ?
[22:09] <leandrojpg> can you help me with this question?
[22:09] * qable (~skrblr@tor2r.ins.tor.net.eu.org) has joined #ceph
[22:09] * reed has no clue ... i should not be in this channel anymore :)
[22:09] * reed (~reed@216.38.134.18) has left #ceph
[22:10] <leandrojpg> ok
[22:10] <leandrojpg> thanks
[22:11] <leandrojpg> someone?
[22:16] <SamYaple> leandrojpg: can you give a bit more detail?
[22:16] * aj__ (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[22:16] <SamYaple> leandrojpg: are you talking about ceph-deploy?
[22:17] <leandrojpg> yes a can
[22:17] <leandrojpg> yes about ceph-deploy install
[22:18] <SamYaple> are you setting the release properly with ceph-deploy?
[22:18] <SamYaple> can you paste exactly the command you are using please?
[22:18] <leandrojpg> ok
[22:19] <leandrojpg> moment
[22:20] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:21] * mykola (~Mikolaj@91.245.79.221) Quit (Quit: away)
[22:24] <leandrojpg> see you
[22:24] <leandrojpg> my SO version centos 6.5
[22:29] <leandrojpg> when I run ceph-deploy install it insists on getting jewel and not hammer; since there is no jewel version for centos 6.5, I am using hammer
[22:29] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) Quit (Ping timeout: 480 seconds)
[22:29] <leandrojpg> [ceph01][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[22:29] <leandrojpg> [ceph01][INFO ] Running command: sudo rpm --import https://download.ceph.com/keys/release.asc
[22:29] <leandrojpg> [ceph01][INFO ] Running command: sudo rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[22:29] <leandrojpg> [ceph01][WARNIN] curl: (22) The requested URL returned error: 404 Not Found
[22:29] <leandrojpg> [ceph01][DEBUG ] Retrieving https://download.ceph.com/rpm-jewel/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[22:29] <leandrojpg> [ceph01][WARNIN] error: skipping https://download.ceph.com/rpm-jewel/el6/noarch/ceph-release-1-0.el6.noarch.rpm - transfer failed
[22:29] <leandrojpg> [ceph01][ERROR ] RuntimeError: command returned non-zero exit status: 1
[22:29] <leandrojpg> [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: rpm -Uvh --replacepkgs https://download.ceph.com/rpm-jewel/el6/noarch/ceph-release-1-0.el6.noarch.rpm
[22:30] <SamYaple> leandrojpg: ceph-deploy install --release hammer
[22:30] <leandrojpg> hummm ok
[22:31] <The1_> I reported a bug in ceph-deploy a long time ago where not using --release does strange things..
[22:31] <The1_> this seems like another.. :)
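Spelled out with placeholder hostnames; hammer is the last release with el6 packages, which is why ceph-deploy's default jewel repo 404s on CentOS 6.5 and the explicit --release (or a pinned repo URL) is needed:

    ceph-deploy install --release hammer ceph01 ceph02 ceph03      # placeholder node names
    # or point straight at the hammer el6 repo:
    ceph-deploy install --repo-url https://download.ceph.com/rpm-hammer/el6/ ceph01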
[22:32] <leandrojpg> this anomaly only happens on centos; on debian it runs nicely
[22:32] <leandrojpg> [ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph-source'
[22:33] <leandrojpg> new error
[22:35] * Jeffrey4l__ (~Jeffrey@119.251.128.22) has joined #ceph
[22:38] * Jeffrey4l_ (~Jeffrey@110.252.60.190) Quit (Ping timeout: 480 seconds)
[22:39] <leandrojpg> is there any estimate for fixing this error?
[22:39] <leandrojpg> SamYaple
[22:39] * qable (~skrblr@61TAAA6N9.tor-irc.dnsbl.oftc.net) Quit ()
[22:40] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[22:44] * Ethan_L (~lamberet@cce02cs4039-fa12-z.ams.hpecore.net) has joined #ceph
[22:46] * evelu (~erwan@37.164.227.207) has joined #ceph
[22:49] <leandrojpg> this is a python error output - can I ignore it?
[22:49] <leandrojpg> Error in sys.exitfunc:
[22:52] * Miouge (~Miouge@109.128.94.173) Quit (Quit: Miouge)
[22:53] <leandrojpg> SamYaple?
[22:56] * Ethan_L (~lamberet@cce02cs4039-fa12-z.ams.hpecore.net) Quit (Remote host closed the connection)
[22:57] * Discovery (~Discovery@109.235.52.4) Quit (Ping timeout: 480 seconds)
[23:06] * evelu (~erwan@37.164.227.207) Quit (Ping timeout: 480 seconds)
[23:12] * natarej (~natarej@101.188.54.14) Quit (Read error: No route to host)
[23:12] * haplo37 (~haplo37@199.91.185.156) Quit (Ping timeout: 480 seconds)
[23:14] <leandrojpg> Sam Yaple?
[23:15] * Nacer (~Nacer@pai34-5-88-176-168-157.fbx.proxad.net) has joined #ceph
[23:16] <leandrojpg> [17:49.49] <leandrojpg> this is a python error output can ignore?
[23:16] <leandrojpg> [17:49.49] <leandrojpg> Error in sys.exitfunc:
[23:17] <leandrojpg> someone?
[23:17] <leandrojpg> help
[23:18] * AndroUser2 (~androirc@107.170.0.159) has joined #ceph
[23:21] * AndroUser2 (~androirc@107.170.0.159) Quit (Remote host closed the connection)
[23:23] * Nacer (~Nacer@pai34-5-88-176-168-157.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[23:23] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[23:26] * dnunez (~dnunez@130.64.25.58) Quit (Quit: Leaving)
[23:26] * vbellur (~vijay@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving.)
[23:28] <blizzow> Is there a good way to migrate data between ceph clusters?
[23:29] <blizzow> Exporting an image from one cluster and re-importing it into a new one is excruciating.
[23:30] <SamYaple> blizzow: rbd-mirror is a decent way, if you have jewel
[23:30] * Miouge (~Miouge@109.128.94.173) has joined #ceph
[23:34] * kevinc (~kevinc__@client64-35.sdsc.edu) has joined #ceph
[23:39] * mattbenjamin (~mbenjamin@12.118.3.106) Quit (Quit: Leaving.)
[23:41] <srk> leandrojpg: I've not used ceph-deploy in a while. It used to throw that error during osd creation, even though osd create was successful. so, afaik, the sys.exitfunc error can be ignored
[23:46] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:47] <blizzow> SamYaple: 1 is Jewel, the other is infernalis.
[23:50] <SamYaple> blizzow: hmmmmm the way rbd mirror works means it *might* be able to pull from an infernalis cluster
[23:50] <SamYaple> blizzow: im not sure to be honest, only played with it once
[23:50] * Miouge (~Miouge@109.128.94.173) Quit (Ping timeout: 480 seconds)
[23:51] <jdillaman> SamYaple: negative -- journaling support is required and infernalis OSDs don't support it
[23:51] <blizzow> welp, okey dokey.
[23:51] <blizzow> thanks guys.
[23:51] <SamYaple> jdillaman: shoot.
[23:52] <SamYaple> blizzow: i mean you can snapshot and send and kind of work out your own transfer message
[23:52] <SamYaple> method*
[23:52] <jdillaman> blizzow: usual path I hear ppl use is to slowly add the new cluster nodes to the old cluster, and then slowly drain off the old nodes
[23:52] <SamYaple> (or upgrade the infernalis to jewel and use rbd-mirror)
[23:55] <blizzow> I was really concerned about the upgrade from Infernalis on Ubuntu Trusty to Jewel on Ubuntu Xenial. I'd rather suffer the pain of migrating images manually between the two than blow out my whole cluster and drive our business to a halt
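For the manual route blizzow settles on, a sketch of streaming images between the two clusters with export/import plus an incremental pass for the final cutover; conf paths, pool, image, and snapshot names are all placeholders:

    # initial copy, taken from a snapshot and streamed without an intermediate file
    rbd -c /etc/ceph/old.conf snap create volumes/vm1@mig1
    rbd -c /etc/ceph/old.conf export volumes/vm1@mig1 - | rbd -c /etc/ceph/new.conf import - volumes/vm1
    rbd -c /etc/ceph/new.conf snap create volumes/vm1@mig1     # matching snapshot so later diffs line up
    # final cutover: stop writers, then ship only what changed since mig1
    rbd -c /etc/ceph/old.conf snap create volumes/vm1@mig2
    rbd -c /etc/ceph/old.conf export-diff --from-snap mig1 volumes/vm1@mig2 - | rbd -c /etc/ceph/new.conf import-diff - volumes/vm1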
[23:57] * LegalResale (~LegalResa@66.165.126.130) Quit (Quit: Leaving)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.