#ceph IRC Log

IRC Log for 2015-04-09

Timestamps are in GMT/BST.

[0:00] * bgleb__ (~bgleb@94.19.146.224) Quit (Remote host closed the connection)
[0:00] * ircolle (~Adium@c-71-229-136-109.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[0:00] * tupper_ (~tcole@rtp-isp-nat1.cisco.com) has joined #ceph
[0:03] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[0:04] * qhartman (~qhartman@den.direwolfdigital.com) has joined #ceph
[0:05] * bene2 (~ben@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[0:11] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[0:13] * dupont-y (~dupont-y@2a01:e34:ec92:8070:f036:a06f:495f:db99) Quit (Quit: Ex-Chat)
[0:20] * narthollis (~shishi@98EAAA17O.tor-irc.dnsbl.oftc.net) Quit ()
[0:20] * Epi (~Rens2Sea@marcuse-1.nos-oignons.net) has joined #ceph
[0:20] * Concubidated1 (~Adium@71.21.5.251) has joined #ceph
[0:21] * DiabloD3 (~diablo@exelion.net) Quit (Quit: do coders dream of sheep()?)
[0:21] * Concubidated (~Adium@71.21.5.251) Quit (Read error: Connection reset by peer)
[0:26] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[0:31] * oms101 (~oms101@p20030057EA011900EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[0:32] * davidz1 (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) has joined #ceph
[0:32] * davidz (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) Quit (Read error: Connection reset by peer)
[0:34] * daniel2_ (~daniel2_@209.163.140.194) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[0:40] * oms101 (~oms101@p20030057EA186700EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[0:46] * PerlStalker (~PerlStalk@162.220.127.20) Quit (Quit: ...)
[0:46] * bkopilov (~bkopilov@bzq-79-182-101-4.red.bezeqint.net) has joined #ceph
[0:50] * Epi (~Rens2Sea@5NZAABCK3.tor-irc.dnsbl.oftc.net) Quit ()
[0:50] * Swompie` (~Enikma@exit1.tor-proxy.net.ua) has joined #ceph
[0:54] * sbfox (~Adium@72.2.49.50) has joined #ceph
[0:56] * davidz (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) has joined #ceph
[0:56] * davidz1 (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) Quit (Read error: Connection reset by peer)
[0:57] * dupont-y (~dupont-y@2a01:e34:ec92:8070:f036:a06f:495f:db99) has joined #ceph
[0:57] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Ping timeout: 480 seconds)
[0:57] * fdmanana (~fdmanana@bl13-135-166.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[0:58] * davidz (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) Quit (Read error: Connection reset by peer)
[0:59] * davidz (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) has joined #ceph
[1:01] * dupont-y (~dupont-y@2a01:e34:ec92:8070:f036:a06f:495f:db99) Quit ()
[1:02] * davidz1 (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) has joined #ceph
[1:02] * davidz (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) Quit (Read error: Connection reset by peer)
[1:05] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[1:07] * ChrisNBlum (~ChrisNBlu@178.255.153.117) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[1:07] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) has joined #ceph
[1:09] * sbfox (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[1:10] * bkopilov (~bkopilov@bzq-79-182-101-4.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[1:10] * bkopilov (~bkopilov@bzq-79-178-48-220.red.bezeqint.net) has joined #ceph
[1:14] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:15] * fsimonce (~simon@host178-188-dynamic.26-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:15] * CheKoLyN (~saguilar@bender.parc.xerox.com) Quit (Quit: Leaving)
[1:16] * zack_dol_ (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:16] * wogri (~wolf@nix.wogri.at) Quit (Remote host closed the connection)
[1:16] * wogri (~wolf@nix.wogri.at) has joined #ceph
[1:19] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[1:20] * Swompie` (~Enikma@425AAAIMS.tor-irc.dnsbl.oftc.net) Quit ()
[1:20] * sese_ (~drdanick@digi00377.digicube.fr) has joined #ceph
[1:24] * rendar (~I@host90-128-dynamic.61-82-r.retail.telecomitalia.it) Quit ()
[1:30] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) has joined #ceph
[1:31] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) has joined #ceph
[1:31] * Groink (~Any@173-228-39-18.dsl.static.fusionbroadband.com) Quit (Quit: My damn controlling terminal disappeared!)
[1:32] * sbfox (~Adium@72.2.49.50) has joined #ceph
[1:33] <Kupo1> Anyone else getting 'fault with nothing to send, going to standby' on 0.94?
[1:34] * sherlocked (~watson@14.139.82.6) has joined #ceph
[1:34] * oms101 (~oms101@p20030057EA186700EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:35] * sherlocked (~watson@14.139.82.6) Quit ()
[1:42] * joef (~Adium@2601:9:280:f2e:84d6:a3ea:5ff9:c0ac) Quit (Quit: Leaving.)
[1:43] * oms101 (~oms101@2003:57:ea2a:b300:eef4:bbff:fe0f:7062) has joined #ceph
[1:43] * sbfox (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[1:46] * sbfox (~Adium@72.2.49.50) has joined #ceph
[1:50] * sese_ (~drdanick@5NZAABCNN.tor-irc.dnsbl.oftc.net) Quit ()
[1:50] * Jourei (~AGaW@tor-exit-hirsiali.unsecu.re) has joined #ceph
[1:53] * visbits_ (~textual@cpe-174-101-246-167.cinci.res.rr.com) Quit (Ping timeout: 480 seconds)
[1:59] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:08] * KevinPerks1 (~Adium@cpe-75-177-32-14.triad.res.rr.com) has left #ceph
[2:12] * LeaChim (~LeaChim@host86-151-147-249.range86-151.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:13] * zack_dolby (~textual@nfmv001073206.uqw.ppp.infoweb.ne.jp) has joined #ceph
[2:13] * alram_ (~alram@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:14] * p66kumar (~p66kumar@74.119.205.248) Quit (Quit: p66kumar)
[2:17] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:20] * Jourei (~AGaW@5NZAABCOQ.tor-irc.dnsbl.oftc.net) Quit ()
[2:20] * hoopy (~dicko@aurora.enn.lu) has joined #ceph
[2:27] * puffy1 (~puffy@216.207.42.129) Quit (Ping timeout: 480 seconds)
[2:29] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Quit: segutier)
[2:34] * sbfox (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[2:36] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[2:37] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has left #ceph
[2:50] * hoopy (~dicko@5NZAABCP6.tor-irc.dnsbl.oftc.net) Quit ()
[2:50] * RaidSoft (~SaneSmith@5.135.85.23) has joined #ceph
[2:50] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[2:53] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[3:02] * yanzheng (~zhyan@171.216.95.48) has joined #ceph
[3:04] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[3:09] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:13] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:15] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[3:20] * RaidSoft (~SaneSmith@5NZAABCRB.tor-irc.dnsbl.oftc.net) Quit ()
[3:24] * shishi (~Kealper@aurora.enn.lu) has joined #ceph
[3:28] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[3:32] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[3:34] * elder (~elder@50.250.13.174) Quit (Quit: Leaving)
[3:39] * root4 (~root@p57B2EBA5.dip0.t-ipconnect.de) has joined #ceph
[3:40] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:42] * zhaochao (~zhaochao@111.161.77.236) has joined #ceph
[3:42] * MrHeavy_ (~mrheavy@pool-108-54-190-117.nycmny.fios.verizon.net) has joined #ceph
[3:46] * root3 (~root@p57B2F074.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:49] * MrHeavy (~mrheavy@pool-108-54-190-117.nycmny.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[3:52] * wushudoin (~wushudoin@2601:9:4b00:f10:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[3:54] * shishi (~Kealper@2WVAABAG5.tor-irc.dnsbl.oftc.net) Quit ()
[3:56] * skullone (~skullone@shell.skull-tech.com) has joined #ceph
[3:59] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) has joined #ceph
[4:06] * hellertime (~Adium@a72-246-185-10.deploy.akamaitechnologies.com) has joined #ceph
[4:08] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) Quit (Remote host closed the connection)
[4:09] * davidz (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) has joined #ceph
[4:09] * davidz1 (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) Quit (Read error: Connection reset by peer)
[4:24] * hellertime (~Adium@a72-246-185-10.deploy.akamaitechnologies.com) Quit (Ping timeout: 480 seconds)
[4:29] * debian112 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[4:29] * matx (~lmg@192.42.116.16) has joined #ceph
[4:29] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) has joined #ceph
[4:30] * omar_m (~omar_m@cpe-72-182-46-23.austin.res.rr.com) has joined #ceph
[4:31] * davidz1 (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) has joined #ceph
[4:31] * davidz (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) Quit (Read error: Connection reset by peer)
[4:36] * kefu (~kefu@114.92.108.72) has joined #ceph
[4:37] * davidz (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) has joined #ceph
[4:37] * davidz1 (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) Quit (Read error: Connection reset by peer)
[4:41] * Henry (~oftc-webi@45.62.104.34.16clouds.com) has joined #ceph
[4:42] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:48] * omar_m (~omar_m@cpe-72-182-46-23.austin.res.rr.com) Quit (Remote host closed the connection)
[4:51] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[4:52] * kefu (~kefu@114.92.108.72) Quit (Max SendQ exceeded)
[4:52] * kefu (~kefu@114.92.108.72) has joined #ceph
[4:54] * bandrus (~brian@128.sub-70-211-79.myvzw.com) Quit (Quit: Leaving.)
[4:58] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[4:59] * matx (~lmg@98EAAA2BV.tor-irc.dnsbl.oftc.net) Quit ()
[4:59] * Izanagi (~Qiasfah@tor-exit.server9.tvdw.eu) has joined #ceph
[5:05] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[5:06] * Egyptian[Laptop] (~marafa@cpe-98-26-77-230.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:07] * Aid2 (491aa3ec@107.161.19.109) has joined #ceph
[5:08] <Aid2> Hi, has libgibraltar been considered for ceph?
[5:09] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Quit: Leaving.)
[5:16] <kefu> Aid2: it's always desirable to have better encoding/decoding performance for EC pools. but i think there is a good chance the OSDs are running on headless boxes with no GPU equipped. that said, i agree there are scenarios where an NVIDIA GPU is available and the user wants to take advantage of it. an erasure plugin supporting this would be nice to have, but i don't think it's the major concern.
[5:17] <Aid2> kefu: thank you for that.
[5:17] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[5:18] <Aid2> I am looking for performance benchmarks for ceph using EC, do you know of any good ones?
[5:20] <Aid2> Also, buying headless boxes with lower-end CPUs and perhaps using GPUs could be more cost-effective than having 5x the number of nodes and more infrastructure
[5:22] * omar_m (~omar_m@cpe-72-182-46-23.austin.res.rr.com) has joined #ceph
[5:23] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[5:24] <kefu> Aid2: sorry i don't have a benchmark. probably, you can ask in #ceph-devel or the mailing list.
[5:24] <Aid2> The question, too, is whether the actual bottleneck of erasure coding is the CPU itself, and whether a GPU would even speed it up?
[5:24] * omar_m (~omar_m@cpe-72-182-46-23.austin.res.rr.com) Quit (Remote host closed the connection)
[5:25] <kefu> yes, that's possible, but as we know, the benchmark and ROI are always closely tied to the use case or the IO pattern.
[5:25] * lalatenduM (~lalatendu@122.172.69.180) has joined #ceph
[5:25] <Aid2> Yes agreed. Thanks
[5:26] <kefu> Aid2: i am not sure i have the answer... too many variables in the equation.
[5:26] <Aid2> I guess there's more research on my end :)
[5:28] <kefu> sorry i cannot be more helpful, but maybe you can drop a mail to the ceph-users list for more insight on this, in case someone has already done an interesting experiment in this area.
[5:28] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[5:29] <Aid2> I understand there's no easy answer.. that's what makes this fun though
[5:29] * Izanagi (~Qiasfah@2WVAABAK4.tor-irc.dnsbl.oftc.net) Quit ()
[5:29] * dontron (~Freddy@37.187.129.166) has joined #ceph
[5:30] <kefu> Aid2: yeah, agreed. =)
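For context on the EC discussion above: erasure coding in Ceph is pluggable per pool profile, which is where a GPU-backed plugin like one built on libgibraltar would slot in. A minimal sketch of creating an EC pool with the stock CPU-based jerasure plugin and benchmarking it (profile name, pool name, k/m and PG counts are illustrative, not from the log):

```shell
# Define an erasure-code profile: k data chunks, m coding chunks.
# "plugin=jerasure" selects the default CPU encoder; an alternative
# plugin would be named here instead.
ceph osd erasure-code-profile set ecdemo k=4 m=2 plugin=jerasure

# Create a pool backed by that profile, then measure raw write throughput.
ceph osd pool create ecpool 64 64 erasure ecdemo
rados -p ecpool bench 30 write
```

`rados bench` only gives a coarse number; as kefu notes, the result depends heavily on the IO pattern, so it is a starting point rather than a definitive EC benchmark.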
[5:34] * Vacuum_ (~vovo@i59F798A1.versanet.de) has joined #ceph
[5:41] * Vacuum (~vovo@i59F79424.versanet.de) Quit (Ping timeout: 480 seconds)
[5:45] * jeff1 (~oftc-webi@pool-173-79-247-201.washdc.fios.verizon.net) Quit (Remote host closed the connection)
[5:45] * hellertime (~Adium@pool-173-48-154-80.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[5:46] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit ()
[5:50] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:50] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[5:52] * lalatenduM (~lalatendu@122.172.69.180) Quit (Ping timeout: 480 seconds)
[5:52] * yanzheng (~zhyan@171.216.95.48) Quit (Quit: ??????)
[5:53] * kefu (~kefu@114.92.108.72) Quit (Quit: Textual IRC Client: www.textualapp.com)
[5:54] * Aid2 (491aa3ec@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[5:54] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[5:57] * Henry (~oftc-webi@45.62.104.34.16clouds.com) Quit (Quit: Page closed)
[5:59] * dontron (~Freddy@5NZAABCYM.tor-irc.dnsbl.oftc.net) Quit ()
[5:59] * Yopi (~clarjon1@176.106.54.54) has joined #ceph
[5:59] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[6:00] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[6:06] * kefu (~kefu@114.92.108.72) has joined #ceph
[6:10] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[6:10] * kefu (~kefu@114.92.108.72) Quit (Max SendQ exceeded)
[6:12] * kefu (~kefu@114.92.108.72) has joined #ceph
[6:29] * Yopi (~clarjon1@2WVAABANZ.tor-irc.dnsbl.oftc.net) Quit ()
[6:29] * Maariu5_ (~spidu_@chulak.enn.lu) has joined #ceph
[6:32] * rdas (~rdas@121.244.87.116) has joined #ceph
[6:39] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:58] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:59] * Maariu5_ (~spidu_@5NZAABC07.tor-irc.dnsbl.oftc.net) Quit ()
[7:02] * nils______ (~nils@doomstreet.collins.kg) has joined #ceph
[7:04] * Dragonshadow (~Revo84@assk2.torservers.net) has joined #ceph
[7:07] * jwilkins (~jwilkins@2601:9:4580:f4c:ea2a:eaff:fe08:3f1d) has joined #ceph
[7:09] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[7:12] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:22] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Quit: segutier)
[7:27] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[7:27] * vbellur (~vijay@122.167.25.98) Quit (Ping timeout: 480 seconds)
[7:30] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[7:33] * nils______ is now known as nils_
[7:33] * Dragonshadow (~Revo84@5NZAABC2U.tor-irc.dnsbl.oftc.net) Quit ()
[7:34] * Teddybareman (~RaidSoft@425AAAIQW.tor-irc.dnsbl.oftc.net) has joined #ceph
[7:41] * zack_dol_ (~textual@pw126254002144.8.panda-world.ne.jp) has joined #ceph
[7:42] * bkopilov (~bkopilov@bzq-79-178-48-220.red.bezeqint.net) Quit (Read error: No route to host)
[7:42] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[7:43] * bkopilov (~bkopilov@bzq-79-178-48-220.red.bezeqint.net) has joined #ceph
[7:45] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:45] * zack_dol_ (~textual@pw126254002144.8.panda-world.ne.jp) Quit (Read error: Connection reset by peer)
[7:45] * sleinen1 (~Adium@2001:620:0:82::103) has joined #ceph
[7:45] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:45] * zack_dol_ (~textual@nfmv001073206.uqw.ppp.infoweb.ne.jp) has joined #ceph
[7:46] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[7:47] * zack_dolby (~textual@nfmv001073206.uqw.ppp.infoweb.ne.jp) Quit (Read error: Connection reset by peer)
[7:49] * bkopilov (~bkopilov@bzq-79-178-48-220.red.bezeqint.net) Quit (Read error: Connection reset by peer)
[7:50] * bkopilov (~bkopilov@bzq-79-178-48-220.red.bezeqint.net) has joined #ceph
[7:50] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:54] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[7:54] * bkopilov (~bkopilov@bzq-79-178-48-220.red.bezeqint.net) Quit (Read error: No route to host)
[7:55] * bkopilov (~bkopilov@bzq-79-178-48-220.red.bezeqint.net) has joined #ceph
[7:58] * t4nk306 (~oftc-webi@124.205.174.146) has joined #ceph
[7:59] <t4nk306> test
[8:03] * Teddybareman (~RaidSoft@425AAAIQW.tor-irc.dnsbl.oftc.net) Quit ()
[8:04] * KristopherBel (~Spessu@madiba.guilhem.org) has joined #ceph
[8:04] * sleinen1 (~Adium@2001:620:0:82::103) Quit (Ping timeout: 480 seconds)
[8:06] * vbellur (~vijay@121.244.87.117) has joined #ceph
[8:11] * super_Wang (~oftc-webi@124.205.174.146) has joined #ceph
[8:12] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[8:13] * t4nk306 (~oftc-webi@124.205.174.146) Quit (Quit: Page closed)
[8:13] * super_Wang (~oftc-webi@124.205.174.146) Quit ()
[8:13] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[8:14] * super_Wang_ (~oftc-webi@124.205.174.146) has joined #ceph
[8:15] * karnan_ (~karnan@121.244.87.117) has joined #ceph
[8:15] * sjmtest (uid32746@id-32746.uxbridge.irccloud.com) Quit (Quit: Connection closed for inactivity)
[8:16] * davidz (~davidz@2605:e000:1313:8003:20ff:2601:79c6:a5c6) Quit (Quit: Leaving.)
[8:19] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[8:20] * sleinen (~Adium@2001:620:0:30:7ed1:c3ff:fedc:3223) has joined #ceph
[8:26] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) has joined #ceph
[8:30] * overclk (~overclk@121.244.87.117) has joined #ceph
[8:33] <cetex> hm. ceph-osd is creating insane amounts of tcp connections:
[8:33] <cetex> root@s2:~# ss -antp | wc -l
[8:33] <cetex> 131306
[8:33] <cetex> root@s2:~#
[8:33] * KristopherBel (~Spessu@98EAAA2GC.tor-irc.dnsbl.oftc.net) Quit ()
[8:34] * HoboPickle (~Defaultti@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[8:34] <cetex> anyone has any input on why?
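To attribute a socket count like cetex's to a process rather than the whole host, the `ss` filter options help. A quick diagnostic sketch (assumes Linux with iproute2's `ss`; each OSD keeps connections to peer OSDs, monitors, and every connected client, so totals scale with cluster size times client count):

```shell
# Sockets owned by ceph-osd daemons specifically.
ss -antp | grep -c 'ceph-osd'

# Only established connections, cluster-wide on this host.
ss -ant state established | wc -l
```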
[8:35] * sbfox (~Adium@S0106c46e1fb849db.vf.shawcable.net) Quit (Quit: Leaving.)
[8:35] * MACscr1 (~Adium@2601:d:c800:de3:6451:85bc:3b8:542b) Quit (Quit: Leaving.)
[8:36] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[8:39] * Nacer (~Nacer@2001:41d0:fe82:7200:e5c5:80b:29e9:84f6) Quit (Remote host closed the connection)
[8:44] * thomnico (~thomnico@2a01:e35:8b41:120:219d:3899:3576:b26c) has joined #ceph
[8:49] * Concubidated1 (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[8:53] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[8:53] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[8:55] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:55] <Be-El> hi
[8:56] * macjack (~macjack@122.146.93.152) has joined #ceph
[8:58] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[8:59] * karnan_ (~karnan@121.244.87.117) Quit (Quit: Leaving)
[9:03] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:03] * HoboPickle (~Defaultti@98EAAA2HB.tor-irc.dnsbl.oftc.net) Quit ()
[9:06] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[9:06] * karnan (~karnan@121.244.87.117) Quit (Quit: Leaving)
[9:06] * karnan (~karnan@121.244.87.117) has joined #ceph
[9:07] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[9:08] * storage (~kiasyn@5NZAABC9L.tor-irc.dnsbl.oftc.net) has joined #ceph
[9:08] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:11] * sleinen1 (~Adium@2001:620:0:82::104) has joined #ceph
[9:11] * sleinen (~Adium@2001:620:0:30:7ed1:c3ff:fedc:3223) Quit (Read error: Connection reset by peer)
[9:13] * analbeard (~shw@support.memset.com) has joined #ceph
[9:14] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[9:15] <Kvisle> is it possible to take snapshots of the object gateway-pools, and have read-only access through the object gateway? let's say I take daily snapshots - is it feasible to point an instance of the object gateway to the snapshot pools?
[9:19] * bgleb (~bgleb@2a02:6b8:0:81f::40) has joined #ceph
[9:19] * bkopilov (~bkopilov@bzq-79-178-48-220.red.bezeqint.net) Quit (Read error: Connection reset by peer)
[9:20] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[9:20] * Concubidated (~Adium@71.21.5.251) Quit ()
[9:30] <anorak> hi
[9:31] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[9:31] * rdas (~rdas@121.244.87.116) Quit (Ping timeout: 480 seconds)
[9:33] * fsimonce (~simon@host178-188-dynamic.26-79-r.retail.telecomitalia.it) has joined #ceph
[9:38] * storage (~kiasyn@5NZAABC9L.tor-irc.dnsbl.oftc.net) Quit ()
[9:44] * cok (~chk@2a02:2350:18:1010:4df:e7ac:4837:5e99) has joined #ceph
[9:46] * oro (~oro@80-219-254-208.dclient.hispeed.ch) Quit (Remote host closed the connection)
[9:51] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[9:51] * Concubidated (~Adium@71.21.5.251) Quit ()
[9:52] * Aid2 (491aa3ec@107.161.19.109) has joined #ceph
[9:54] * Aid2 (491aa3ec@107.161.19.109) Quit ()
[9:56] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[9:58] * i_m (~ivan.miro@deibp9eh1--blueice1n2.emea.ibm.com) has joined #ceph
[10:00] * tobiash (~quassel@mail.bmw-carit.de) Quit (Remote host closed the connection)
[10:00] * tobiash (~quassel@mail.bmw-carit.de) has joined #ceph
[10:02] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[10:08] * LRWerewolf (~Kurimus@static-ip-85-25-103-119.inaddr.ip-pool.com) has joined #ceph
[10:09] * kavanagh (~androirc@78-32-127-104.static.enta.net) has joined #ceph
[10:10] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[10:10] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:14] * tobiash (~quassel@mail.bmw-carit.de) Quit (Remote host closed the connection)
[10:18] * tobiash (~quassel@mail.bmw-carit.de) has joined #ceph
[10:22] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[10:22] * Concubidated (~Adium@71.21.5.251) Quit ()
[10:27] * kavanagh (~androirc@78-32-127-104.static.enta.net) Quit (Ping timeout: 480 seconds)
[10:28] * mourgaya (~mourgaya@80.124.164.139) has joined #ceph
[10:35] * ksperis (~ksperis@46.218.42.103) has joined #ceph
[10:35] * oro (~oro@2001:620:20:16:44ce:b1f7:521b:6eb6) has joined #ceph
[10:37] * branto (~branto@178-253-130-42.3pp.slovanet.sk) has joined #ceph
[10:38] * LRWerewolf (~Kurimus@98EAAA2KV.tor-irc.dnsbl.oftc.net) Quit ()
[10:38] * jacoo (~DougalJac@h2343030.stratoserver.net) has joined #ceph
[10:39] * ToMiles (~ToMiles@nl13x.mullvad.net) Quit (Quit: leaving)
[10:39] * ChrisNBlum (~ChrisNBlu@178.255.153.117) has joined #ceph
[10:40] * kavanagh (~androirc@78-32-127-104.static.enta.net) has joined #ceph
[10:44] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[10:44] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[10:53] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[10:53] * Concubidated (~Adium@71.21.5.251) Quit ()
[10:55] * rendar (~I@host175-182-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[10:55] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[10:57] * mourgaya (~mourgaya@80.124.164.139) has left #ceph
[11:01] * tobiash (~quassel@mail.bmw-carit.de) Quit (Remote host closed the connection)
[11:02] * MACscr (~Adium@2601:d:c800:de3:6d6d:debf:31ca:fcb7) has joined #ceph
[11:04] * fdmanana (~fdmanana@bl13-135-166.dsl.telepac.pt) has joined #ceph
[11:04] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[11:05] * b0e (~aledermue@213.95.25.82) has joined #ceph
[11:08] * jacoo (~DougalJac@5NZAABDEU.tor-irc.dnsbl.oftc.net) Quit ()
[11:09] * mourgaya (~mourgaya@80.124.164.139) has joined #ceph
[11:09] * mourgaya (~mourgaya@80.124.164.139) has left #ceph
[11:12] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[11:14] * zack_dol_ (~textual@nfmv001073206.uqw.ppp.infoweb.ne.jp) Quit (Ping timeout: 480 seconds)
[11:15] <cetex> hm
[11:15] <cetex> when i switched to btrfs as backend the data and mds pools weren't created
[11:16] <cetex> but when i had ext4 they were
[11:16] <cetex> any idea why?
[11:16] * super_Wang_ (~oftc-webi@124.205.174.146) Quit (Remote host closed the connection)
[11:17] * pcaruana (~pcaruana@nat-pool-brq-t.redhat.com) has joined #ceph
[11:21] * bkopilov (~bkopilov@bzq-79-178-48-220.red.bezeqint.net) has joined #ceph
[11:22] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:24] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[11:24] * Concubidated (~Adium@71.21.5.251) Quit ()
[11:26] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[11:31] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[11:31] <cetex> ah. most likely because of the hammer release.
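cetex's guess matches the Hammer (0.94) behavior: the legacy `data` and `metadata` pools and a default filesystem are no longer created at cluster bootstrap, so CephFS has to be set up explicitly. A minimal sketch (pool names and PG counts are illustrative):

```shell
# Create the data and metadata pools CephFS will use.
ceph osd pool create cephfs_data 64
ceph osd pool create cephfs_metadata 64

# Create the filesystem on top of them; only then will an MDS go active.
ceph fs new cephfs cephfs_metadata cephfs_data
```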
[11:33] * Hemanth (~Hemanth@121.244.87.118) has joined #ceph
[11:37] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[11:38] * hassifa (~Kurimus@dreamatorium.badexample.net) has joined #ceph
[11:39] * haigang (~haigang@180.166.129.186) has joined #ceph
[11:43] * rdas (~rdas@121.244.87.116) has joined #ceph
[11:48] * alexxy[home] (~alexxy@2001:470:1f14:106::2) Quit (Remote host closed the connection)
[11:49] * rakanbakir (~rakanbaki@178.20.186.194) has joined #ceph
[11:49] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[11:51] * kavanagh (~androirc@78-32-127-104.static.enta.net) Quit (Remote host closed the connection)
[11:55] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[11:55] * Concubidated (~Adium@71.21.5.251) Quit ()
[12:04] * shyu (~Shanzhi@119.254.196.66) Quit (Ping timeout: 480 seconds)
[12:07] * cok (~chk@2a02:2350:18:1010:4df:e7ac:4837:5e99) Quit (Quit: Leaving.)
[12:08] * hassifa (~Kurimus@5NZAABDHO.tor-irc.dnsbl.oftc.net) Quit ()
[12:09] * antoine (~bourgault@192.93.37.4) Quit (Ping timeout: 480 seconds)
[12:11] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[12:11] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[12:13] * w2k (~qable@static-ip-209-126-110-112.inaddr.ip-pool.com) has joined #ceph
[12:16] * Hemanth (~Hemanth@121.244.87.118) Quit (Ping timeout: 480 seconds)
[12:18] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[12:18] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[12:18] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:20] * kefu (~kefu@114.92.108.72) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[12:25] * i_m (~ivan.miro@deibp9eh1--blueice1n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[12:26] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[12:26] * Concubidated (~Adium@71.21.5.251) Quit ()
[12:27] * i_m (~ivan.miro@195.212.22.247) has joined #ceph
[12:28] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[12:35] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[12:39] * i_m (~ivan.miro@195.212.22.247) Quit (Ping timeout: 480 seconds)
[12:39] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[12:41] * shyu (~Shanzhi@119.254.196.66) Quit (Remote host closed the connection)
[12:41] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[12:42] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[12:43] * w2k (~qable@98EAAA2OL.tor-irc.dnsbl.oftc.net) Quit ()
[12:43] * Ralth (~Helleshin@bolobolo1.torservers.net) has joined #ceph
[12:43] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[12:43] * rakanbakir (~rakanbaki@178.20.186.194) Quit ()
[12:45] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[12:49] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[12:50] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[12:57] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[12:57] * Concubidated (~Adium@71.21.5.251) Quit ()
[12:57] * xcezzz (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) Quit (Quit: Leaving.)
[12:58] * shylesh (~shylesh@121.244.87.124) Quit (Ping timeout: 480 seconds)
[13:00] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[13:00] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[13:07] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[13:08] <jnq> if we change placement rules for a pool with the addition of SSDs for instance, will all data get moved or just new data?
[13:09] * hellertime (~Adium@a23-79-238-10.deploy.static.akamaitechnologies.com) has joined #ceph
[13:13] * Ralth (~Helleshin@5NZAABDKI.tor-irc.dnsbl.oftc.net) Quit ()
[13:16] * rakanbakir (~rakanbaki@178.20.186.194) has joined #ceph
[13:17] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) Quit (Quit: Clap on! , Clap off! Clap@#&$NO CARRIER)
[13:17] * lmb (~lmb@2a02:8109:8100:1d2c:707d:54b7:f5a6:56d2) Quit (Quit: Life++)
[13:17] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[13:18] * renzhi (~renzhi@116.226.74.246) Quit (Ping timeout: 480 seconds)
[13:19] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[13:23] <championofcyrodi> jnq: data will be balanced based on the crush map rules
[13:24] * haigang (~haigang@180.166.129.186) Quit (Quit: ??????)
[13:24] <championofcyrodi> for example, replicas will generally not be placed on the same server if two osds reside on that server, depending on how many replicas you have. but to answer your question: "Yes. Data will be moved from the other hard drives onto the new one based on how the osd is weighted/reweighted."
[13:25] <championofcyrodi> you can use 'ceph -w' to watch this happen.
[13:25] <championofcyrodi> (after you reweight/add an osd)
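The answer above can be seen end-to-end: existing data, not just new writes, is backfilled when a pool's placement changes. A sketch of retargeting a pool at a different CRUSH rule and watching the movement, using Hammer-era CLI (the rule name, "ssd" root, pool name, and ruleset id are illustrative and assume an SSD branch already exists in the CRUSH map):

```shell
# Create a simple rule that places replicas on hosts under the "ssd" root.
ceph osd crush rule create-simple ssd-rule ssd host

# Point an existing pool at that rule by its ruleset id; affected PGs are
# remapped immediately and their existing objects are backfilled.
ceph osd pool set rbd crush_ruleset 1

# Watch PGs go through active+remapped / backfilling until HEALTH_OK.
ceph -w
</imports>
```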
[13:26] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[13:26] <T1w> hi
[13:26] * bgleb (~bgleb@2a02:6b8:0:81f::40) Quit (Remote host closed the connection)
[13:26] * bgleb (~bgleb@77.88.2.37-spb.dhcp.yndx.net) has joined #ceph
[13:27] <T1w> apart from the recommendations on not running multiple metadata servers, what's the status?
[13:27] <T1w> can it be expected to become supported "any time now" or is it still far off?
[13:28] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[13:28] * Concubidated (~Adium@71.21.5.251) Quit ()
[13:28] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:28] <darkfaded> T1w: i only know as far as the hammer release notes go, and that was _not_ an 'any time now', it was a "single mds operation now looks really good"
[13:29] <T1w> darkfaded: ok
[13:29] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[13:29] <T1w> what about failure of a mds
[13:29] <T1w> can I just create a new mds instance on another node and continue where my clients left off without any really big issues?
[13:30] * bilco105 is now known as bilco105_
[13:30] <T1w> I expect that there might be some data loss if the mds disappears without warning
[13:30] <darkfaded> <- shivers. wait for feedback from someone who tested more. i've even been lucky with multi-mds a year back
[13:30] <T1w> but is adding a new mds if an old one is not able to return for a few hours a viable option?
[13:34] * bgleb (~bgleb@77.88.2.37-spb.dhcp.yndx.net) Quit (Ping timeout: 480 seconds)
[13:37] * renzhi (~renzhi@116.226.41.179) has joined #ceph
[13:39] <T1w> what are the pitfalls on multiple mds?
[13:44] * bilco105_ is now known as bilco105
[13:46] <jcsp1> T1w: the warning about multiple MDSs is about multiple *active* MDSs. You can have multiple standby MDSs and a single active MDS, and you'll only be using one at a time.
[13:47] <jcsp1> by default, that's what happens when you create more MDS daemons. The "max_mds" setting stays at 1, so you only get one active at a time
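What jcsp1 describes (extra daemons becoming standbys while max_mds stays at 1) can be checked from the CLI; the exact max_mds command varies by release, so treat the last line as a sketch:

```shell
# The MDS map shows one active MDS (rank 0) plus any standbys
ceph mds stat

# max_mds controls how many *active* ranks are allowed; leaving it at 1
# keeps every additional daemon as a standby
ceph fs set cephfs max_mds 1   # newer releases; older ones used 'ceph mds set'
```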
[13:47] <T1w> hm, ok
[13:47] <T1w> what happens when a mds disappears from a client viewpoint?
[13:47] * CorneliousJD|AtWork (~utugi____@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[13:48] <T1w> nothing as a standby mds takes over?
[13:48] <jcsp1> MDS daemons are assigned a logical "rank" when they become active. If a client was talking to an MDS acting as rank 0, and that MDS fails, then the client will patiently wait for another MDS to come up and act as rank 0. The client finds out about the new MDS via the mon cluster.
[13:49] <jcsp1> from the point of view of someone *using* the filesystem, they will see their metadata operations block for a short time (data operations to open files will mostly continue uninterrupted)
[13:49] <T1w> ah, okay
[13:49] <T1w> afk for 5 mins..
[13:49] * Hemanth (~Hemanth@121.244.87.117) Quit (Quit: Leaving)
[13:50] * renzhi (~renzhi@116.226.41.179) Quit (Ping timeout: 480 seconds)
[13:55] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[13:56] * bgleb (~bgleb@2a02:6b8:0:c33::13) has joined #ceph
[13:56] <T1w> how are writes to the same file in a CephFS from different clients at the same time handled?
[13:59] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[13:59] * calvinx (~calvin@103.7.202.198) has joined #ceph
[13:59] * renzhi (~renzhi@116.226.37.77) has joined #ceph
[13:59] * Concubidated (~Adium@71.21.5.251) Quit ()
[14:02] * ganders (~root@190.2.42.21) has joined #ceph
[14:02] * bgleb (~bgleb@2a02:6b8:0:c33::13) Quit (Remote host closed the connection)
[14:03] * bgleb (~bgleb@94.19.146.224) has joined #ceph
[14:05] <jcsp1> T1w: if two clients are trying to write, they will take it in turns to gain a lock on the file (the lock is handed out by the MDS)
[14:06] <jcsp1> unlike some filesystems (especially Lustre), cephfs doesn't do extent-based locking within a single file. Historically there was an O_LAZY flag that let you do your own locking at the application level instead
[14:07] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:07] * tupper_ (~tcole@rtp-isp-nat1.cisco.com) Quit (Read error: Connection reset by peer)
[14:11] * bgleb (~bgleb@94.19.146.224) Quit (Ping timeout: 480 seconds)
[14:14] * jdillaman (~jdillaman@pool-173-66-110-250.washdc.fios.verizon.net) has joined #ceph
[14:17] * CorneliousJD|AtWork (~utugi____@98EAAA2RZ.tor-irc.dnsbl.oftc.net) Quit ()
[14:17] * Pirate (~xolotl@exit1.ipredator.se) has joined #ceph
[14:17] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:18] * rakanbakir (~rakanbaki@178.20.186.194) Quit ()
[14:20] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:e44d:a828:c1ad:2d75) Quit (Quit: Leaving.)
[14:24] * KevinPerks (~Adium@cpe-75-177-32-14.triad.res.rr.com) has joined #ceph
[14:24] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) has joined #ceph
[14:28] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[14:30] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[14:30] * Concubidated (~Adium@71.21.5.251) Quit ()
[14:33] * vbellur (~vijay@121.244.87.124) has joined #ceph
[14:37] * sleinen (~Adium@macsl.switch.ch) has joined #ceph
[14:38] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) has joined #ceph
[14:38] * wschulze (~wschulze@cpe-74-73-11-233.nyc.res.rr.com) has joined #ceph
[14:39] * sleinen1 (~Adium@2001:620:0:82::104) Quit (Read error: Connection reset by peer)
[14:39] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[14:40] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[14:42] <T1w> jcsp1: ok - so it's just basic locking
[14:43] <T1w> since data is stored in 4MB chunks, what about writing to different parts of the same file at the same time? not supported?
[14:46] <jcsp1> it's supported for applications to send the writes at the same time, it's just that they will get implicitly serialized inside cephfs because the locking is at file granularity, not object granularity.
[14:47] * Pirate (~xolotl@2WVAABBDE.tor-irc.dnsbl.oftc.net) Quit ()
[14:47] <jcsp1> What's your use case? Usually it's HPC workloads that have many clients writing to the same file.
[14:50] <T1w> ok
[14:50] <T1w> well.. no use case at the moment
[14:51] <T1w> I'm just looking at ceph and trying to see if it is usable for us
[14:51] <T1w> I'm just brainstorming and reading docs/wiki etc etc and asking questions
[14:52] * visored (~Snowman@dreamatorium.badexample.net) has joined #ceph
[14:52] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) Quit (Quit: Leaving)
[14:52] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[14:53] <cetex> i'm having trouble getting rocksdb running.
[14:53] <cetex> and leveldb.
[14:53] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[14:54] <cetex> it recognizes enable_experimental_unrecoverable_data_corrupting_features = keyvaluestore
[14:54] <cetex> and i've set keyvaluestore_backend = leveldb
[14:55] <cetex> but it seems weird, i get folders like /data/2/ceph/current/0.18_head that contains __head_00000018__0
[14:55] <cetex> and the logs says btrfsfilebackend
[14:55] * Egyptian[Laptop] (~marafa@cpe-98-26-77-230.nc.res.rr.com) has joined #ceph
[15:01] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[15:01] * Concubidated (~Adium@71.21.5.251) Quit ()
[15:03] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[15:04] * i_m (~ivan.miro@deibp9eh1--blueice4n2.emea.ibm.com) has joined #ceph
[15:06] <T1w> what's the difference between using rbd to create a cephfs or using ceph osd pool create commands?
[15:06] <T1w> .. along with ceph fs
[15:08] <jcsp1> that question doesn't really make sense. RBD and CephFS are separate things. They both sit on top of RADOS, they don't sit on top of each other.
[15:08] <T1w> ah
[15:08] <T1w> ok
[15:09] * delattec (~cdelatte@174.96.107.132) Quit (Ping timeout: 480 seconds)
[15:10] * xcezzz (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) has joined #ceph
[15:11] <T1w> it dawns..
[15:11] <T1w> damn
[15:11] <T1w> okay then..
[15:13] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[15:15] * tupper_ (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) has joined #ceph
[15:15] <T1w> if I just want a ceph-backed filesystem to store some data from a bunch of clients and allow each client to read/write files from other clients
[15:15] <T1w> what are the pros/cons of using rbd or CephFS?
[15:16] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:20] * linuxkidd (~linuxkidd@vpngac.ccur.com) has joined #ceph
[15:20] <T1w> can I assume that if I use RBD I'm responsible for creating enough inodes, choosing the right filesystem and handling fsck etc etc
[15:20] * harold (~hamiller@71-94-227-66.dhcp.mdfd.or.charter.com) has joined #ceph
[15:21] <T1w> and that if I use CephFS I just pour data into it knowing that it scales correctly?
[15:22] * visored (~Snowman@5NZAABDRM.tor-irc.dnsbl.oftc.net) Quit ()
[15:22] <T1w> oh..
[15:22] <T1w> a block device can only be mounted on one client at a time, while CephFS can be used by several clients at once
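The distinction T1w just arrived at, sketched concretely; the pool, image, monitor address and mount points are placeholders:

```shell
# RBD: a raw block device. You pick the filesystem, you run fsck,
# and you mount it on one client at a time.
rbd create mypool/myimage --size 10240      # size in MB -> a 10 GiB image
rbd map mypool/myimage                      # appears as e.g. /dev/rbd0
mkfs.xfs /dev/rbd0
mount /dev/rbd0 /mnt/rbd

# CephFS: a shared POSIX filesystem. Many clients can mount it at once,
# with the MDS coordinating access.
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
```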
[15:24] <T1w> stupid me
[15:25] * rdas (~rdas@121.244.87.116) Quit (Ping timeout: 480 seconds)
[15:31] * harold (~hamiller@71-94-227-66.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[15:32] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[15:32] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[15:32] * b0e (~aledermue@213.95.25.82) has joined #ceph
[15:32] * Concubidated (~Adium@71.21.5.251) Quit ()
[15:33] * b0e (~aledermue@213.95.25.82) Quit ()
[15:33] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[15:33] * b0e (~aledermue@213.95.25.82) has joined #ceph
[15:35] * rdas (~rdas@121.244.87.116) has joined #ceph
[15:37] * zhaochao (~zhaochao@111.161.77.236) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 31.6.0/20150331233809])
[15:37] * zhaochao (~zhaochao@111.161.77.236) has joined #ceph
[15:37] * zhaochao (~zhaochao@111.161.77.236) Quit ()
[15:38] * zhaochao (~zhaochao@111.161.77.236) has joined #ceph
[15:39] * zhaochao (~zhaochao@111.161.77.236) Quit ()
[15:40] * cok (~chk@2a02:2350:18:1010:280c:cd17:9d5f:ade7) has joined #ceph
[15:46] <MaZ-> hrm... where to bug report something for radosgw-agent?
[15:47] * yghannam (~yghannam@0001f8aa.user.oftc.net) has joined #ceph
[15:52] * Plesioth (~capitalth@37.187.129.166) has joined #ceph
[15:52] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) has joined #ceph
[15:53] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[15:53] <jcsp1> MaZ-: http://tracker.ceph.com/projects/rgw/issues
[15:54] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[15:57] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[15:57] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[15:58] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[15:58] <MaZ-> jcsp1: cheers
[16:01] * lalatenduM (~lalatendu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[16:03] * PerlStalker (~PerlStalk@162.220.127.20) has joined #ceph
[16:03] * ChrisNBl_ (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[16:03] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[16:03] * Concubidated (~Adium@71.21.5.251) Quit ()
[16:04] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:07] * ChrisNBl_ (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit ()
[16:09] * shylesh (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[16:10] * vbellur (~vijay@121.244.87.117) has joined #ceph
[16:10] * ChrisNBlum (~ChrisNBlu@178.255.153.117) Quit (Ping timeout: 480 seconds)
[16:12] * lalatenduM (~lalatendu@121.244.87.124) has joined #ceph
[16:14] * DV__ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[16:14] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[16:14] * omar_m (~omar_m@209.163.140.194) has joined #ceph
[16:15] * omar_m (~omar_m@209.163.140.194) Quit (Remote host closed the connection)
[16:15] * omar_m (~omar_m@209.163.140.194) has joined #ceph
[16:17] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[16:17] * pcaruana (~pcaruana@nat-pool-brq-t.redhat.com) Quit (Quit: Leaving)
[16:18] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[16:19] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[16:21] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[16:22] * Plesioth (~capitalth@2WVAABBJE.tor-irc.dnsbl.oftc.net) Quit ()
[16:22] * Random (~Epi@176.10.99.206) has joined #ceph
[16:25] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[16:25] * zw (~wesley@spider.pfoe.be) has joined #ceph
[16:27] <zw> Hi. I have a 4 server cluster to test with. I have 16 osd's. When I pull a disk from the 3rd server it does not get marked as down when running ceph osd tree ?
[16:27] <zw> It does disappear when running ceph-disk list on the node itself
[16:27] <zw> We are running Hammer
[16:27] <zw> v0.94
[16:29] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Quit: Leaving.)
[16:30] * wushudoin (~wushudoin@209.132.181.86) has joined #ceph
[16:32] <seapasulli> do you have any special flags still set? ceph osd unset nodown; ceph osd unset noout ?
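A quick way to check for (and clear) the flags seapasulli is asking about:

```shell
# Cluster-wide flags such as nodown/noout show up in the OSD map and health
ceph osd dump | grep flags
ceph health detail

# With nodown set, a pulled disk's OSD is never marked down; clear the flags
# so failure detection works normally again
ceph osd unset nodown
ceph osd unset noout
```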
[16:34] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[16:34] * Concubidated (~Adium@71.21.5.251) Quit ()
[16:37] <zw> seapasulli: no
[16:37] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[16:38] <zw> seapasulli: yes, it was that. What are those flags ?
[16:39] <zw> Many thanks
[16:41] * shaunm (~shaunm@74.215.76.114) Quit (Quit: Ex-Chat)
[16:43] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[16:50] * kefu (~kefu@114.92.99.163) has joined #ceph
[16:52] * Random (~Epi@425AAAIXV.tor-irc.dnsbl.oftc.net) Quit ()
[16:53] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:53] <rkeene> Heh, no... yes ? :-P
[16:54] * DV__ (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[16:57] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[16:57] <zw> rkeene: the flag was set indeed :)
[16:57] * puffy (~puffy@50.185.218.255) has joined #ceph
[16:58] <rkeene> But you said it wasn't ! :-)
[16:58] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[17:01] * elder (~elder@50.250.13.174) has joined #ceph
[17:01] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:05] * analbeard (~shw@support.memset.com) has joined #ceph
[17:05] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[17:05] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[17:05] * analbeard (~shw@support.memset.com) Quit ()
[17:05] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[17:05] * analbeard (~shw@support.memset.com) has joined #ceph
[17:05] * Concubidated (~Adium@71.21.5.251) Quit ()
[17:06] * analbeard (~shw@support.memset.com) has left #ceph
[17:10] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:10] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[17:12] * harold (~hamiller@71-94-227-66.dhcp.mdfd.or.charter.com) has joined #ceph
[17:13] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[17:14] * thomnico (~thomnico@2a01:e35:8b41:120:219d:3899:3576:b26c) Quit (Ping timeout: 480 seconds)
[17:15] * bandrus (~brian@128.sub-70-211-79.myvzw.com) has joined #ceph
[17:19] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[17:21] * cok (~chk@2a02:2350:18:1010:280c:cd17:9d5f:ade7) Quit (Quit: Leaving.)
[17:22] * FierceForm (~cmrn@exit1.ipredator.se) has joined #ceph
[17:27] * calvinx (~calvin@103.7.202.198) Quit (Quit: calvinx)
[17:29] * daniel2_ (~daniel2_@209.163.140.194) has joined #ceph
[17:31] * zw (~wesley@spider.pfoe.be) Quit (Quit: leaving)
[17:32] * joef (~Adium@12.250.139.94) has joined #ceph
[17:32] * joef (~Adium@12.250.139.94) has left #ceph
[17:34] * Sysadmin88 (~IceChat77@054527d3.skybroadband.com) has joined #ceph
[17:41] * joef1 (~Adium@2620:79:0:2420::11) has joined #ceph
[17:41] * xcezzz (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) Quit (Read error: Connection reset by peer)
[17:41] * xcezzz (~Adium@pool-100-3-14-19.tampfl.fios.verizon.net) has joined #ceph
[17:42] * Mika_c (~quassel@59-115-152-246.dynamic.hinet.net) has joined #ceph
[17:45] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[17:46] * sleinen (~Adium@macsl.switch.ch) Quit (Ping timeout: 480 seconds)
[17:48] * B_Rake (~B_Rake@69-195-66-67.unifiedlayer.com) has joined #ceph
[17:48] * joef1 (~Adium@2620:79:0:2420::11) Quit (Read error: Connection reset by peer)
[17:49] * oro (~oro@2001:620:20:16:44ce:b1f7:521b:6eb6) Quit (Ping timeout: 480 seconds)
[17:50] * joef (~Adium@2620:79:0:2420::11) has joined #ceph
[17:51] * joef (~Adium@2620:79:0:2420::11) has left #ceph
[17:52] * FierceForm (~cmrn@2FBAAA92F.tor-irc.dnsbl.oftc.net) Quit ()
[17:52] * K3NT1S_aw (~ItsCrimin@spftor4e1.privacyfoundation.ch) has joined #ceph
[17:54] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[17:55] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[17:55] * joef1 (~Adium@2620:79:0:2420::11) has joined #ceph
[17:59] * Concubidated (~Adium@71.21.5.251) Quit (Read error: Connection reset by peer)
[17:59] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[17:59] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[18:00] * daniel2_ (~daniel2_@209.163.140.194) Quit (Read error: Connection reset by peer)
[18:02] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[18:03] * tk12 (~tk12@68.140.239.132) has joined #ceph
[18:04] * vbellur (~vijay@122.167.25.98) has joined #ceph
[18:04] * daniel2_ (~daniel2_@209.163.140.194) has joined #ceph
[18:05] * thomnico (~thomnico@2a01:e35:8b41:120:5101:80f1:b96c:b12d) has joined #ceph
[18:06] * rastro (~rastro@68.140.239.132) has joined #ceph
[18:10] * joef1 (~Adium@2620:79:0:2420::11) has left #ceph
[18:10] * shaunm (~shaunm@74.215.76.114) Quit (Ping timeout: 480 seconds)
[18:14] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:17] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) Quit (Quit: Leaving.)
[18:19] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) has joined #ceph
[18:19] * p66kumar (~p66kumar@74.119.205.248) has joined #ceph
[18:21] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[18:22] * K3NT1S_aw (~ItsCrimin@98EAAA22N.tor-irc.dnsbl.oftc.net) Quit ()
[18:22] * hgjhgjh (~Xylios@176.10.99.209) has joined #ceph
[18:27] * ksperis (~ksperis@46.218.42.103) Quit (Remote host closed the connection)
[18:28] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[18:29] * reed (~reed@c-73-15-132-188.hsd1.ca.comcast.net) has joined #ceph
[18:30] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:30] * kefu (~kefu@114.92.99.163) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[18:34] * reed (~reed@c-73-15-132-188.hsd1.ca.comcast.net) Quit ()
[18:34] * reed (~reed@c-73-15-132-188.hsd1.ca.comcast.net) has joined #ceph
[18:37] * kefu (~kefu@114.92.99.163) has joined #ceph
[18:38] * nils_ (~nils@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[18:38] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:39] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:41] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) has joined #ceph
[18:47] * danieagle (~Daniel@200-148-38-108.dsl.telesp.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[18:49] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[18:49] * bene (~ben@c-75-68-96-186.hsd1.nh.comcast.net) has joined #ceph
[18:50] * lalatenduM (~lalatendu@121.244.87.124) Quit (Quit: Leaving)
[18:52] * hgjhgjh (~Xylios@425AAAIZL.tor-irc.dnsbl.oftc.net) Quit ()
[18:52] * _br_ (~nartholli@tor-exit.eecs.umich.edu) has joined #ceph
[18:52] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[18:53] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:53] * davidz (~davidz@2605:e000:1313:8003:a062:66ed:1fb:a16f) has joined #ceph
[18:54] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[18:54] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:55] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:55] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[18:57] * madkiss (~madkiss@2001:6f8:12c3:f00f:40ec:9cb2:84eb:cf3e) has joined #ceph
[19:00] * Mika_c (~quassel@59-115-152-246.dynamic.hinet.net) Quit (Remote host closed the connection)
[19:01] * cdelatte (~cdelatte@2001:1998:2000:101::3f) has joined #ceph
[19:05] * bgleb (~bgleb@94.19.146.224) has joined #ceph
[19:06] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:07] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[19:12] * kefu (~kefu@114.92.99.163) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[19:12] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) Quit (Quit: Ex-Chat)
[19:13] * delattec (~cdelatte@2606:a000:6e63:4c00:3e15:c2ff:feb8:dff8) has joined #ceph
[19:15] * cdelatte (~cdelatte@2001:1998:2000:101::3f) Quit (Read error: Connection reset by peer)
[19:15] * danieagle (~Daniel@200-148-38-108.dsl.telesp.net.br) has joined #ceph
[19:16] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[19:16] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[19:17] * ndevos_ (~ndevos@nat-pool-ams2-5.redhat.com) Quit (Ping timeout: 480 seconds)
[19:18] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[19:22] * _br_ (~nartholli@2WVAABBUS.tor-irc.dnsbl.oftc.net) Quit ()
[19:22] * Redshift (~basicxman@lumumba.torservers.net) has joined #ceph
[19:22] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[19:23] * sbfox (~Adium@72.2.49.50) has joined #ceph
[19:24] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[19:28] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[19:29] * mykola (~Mikolaj@91.225.201.100) has joined #ceph
[19:30] <cetex> is leveldb / rokcsdb only for metadata? or is it for data as well?
[19:33] <joelm> journal afaik?
[19:33] <cetex> hm, ok. but data is stored outside of the db?
[19:33] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[19:34] <joelm> sure, there'll be pointers to the data I imagine, but I doubt putting all the OSD data in level/rocks is a good idea!
[19:34] <cetex> yeah. sounds reasonable.
[19:34] * sherryg (~Adium@12.161.168.178) has joined #ceph
[19:34] <cetex> so still double-commit then?
[19:34] <cetex> *double write
[19:35] <cetex> :)
[19:36] <joelm> why?
[19:36] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[19:36] <cetex> write to journal -> ack to client -> (later) write to file?
[19:36] <joelm> depends if it's using COW I guess, not sure :)
[19:37] <cetex> that's something i'd like to know..
[19:37] <cetex> :>
[19:37] <joelm> should be in the docs I'd imagine :)
[19:37] <cetex> can't find anything
[19:37] <cetex> can barely find anything about leveldb or rocksdb
[19:38] <joelm> http://ceph.com/docs/master/architecture/
[19:38] <joelm> sure that might have some stuff in :)
[19:41] * achieva (ZISN2.9G@foresee.postech.ac.kr) has joined #ceph
[19:41] <achieva> Hi
[19:42] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[19:43] <jwilkins> cetex: I believe there was some experimentation to that end. The interface, as I understand it, is an object store which specializes into a file store. There are other cases like the key-value store that were tested out, but if I understand correctly, there is going to be a new interface. It was labelled as experimental in the code, so I didn't publish docs for it.
[19:43] * kingcu (~kingcu@kona.ridewithgps.com) Quit (Quit: leaving)
[19:43] * thomnico (~thomnico@2a01:e35:8b41:120:5101:80f1:b96c:b12d) Quit (Ping timeout: 480 seconds)
[19:44] * bgleb (~bgleb@94.19.146.224) Quit (Remote host closed the connection)
[19:45] * thomnico (~thomnico@2a01:e35:8b41:120:5101:80f1:b96c:b12d) has joined #ceph
[19:46] * bgleb (~bgleb@2a02:6b8:0:6::2e) has joined #ceph
[19:46] * bgleb (~bgleb@2a02:6b8:0:6::2e) Quit (Remote host closed the connection)
[19:47] <sherryg> does anyone know the versions of aws sdk that are compatible with ceph object gateway (version 0.84) s3 api?
[19:47] * harold (~hamiller@71-94-227-66.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[19:47] <achieva> I've tried to evaluate the software overhead of Ceph. To remove network overheads, I use a single OSD to build the entire storage cluster (OSD and monitor are on the same node), then use rbd to test the block device with the fio benchmark. Unfortunately, the OSD is continuously aborted (only when testing the 4K random write case).
[19:48] <achieva> I don't know why only the 4K random write case fails to run on ceph.
[19:49] <achieva> Doesn't ceph support 4K random write?
[19:51] <achieva> [http://pastebin.com/zkFWWF2A] This is my testing parameters and my configuration file.
[19:52] * Redshift (~basicxman@5NZAABEAQ.tor-irc.dnsbl.oftc.net) Quit ()
[19:52] <achieva> BTW, I cannot find a dumped core file (generated by <ceph-osd> Aborted (core dumped)).
[19:52] <achieva> I set "ulimit -c unlimited". I don't know where the core file is generated :( orz
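One way to answer achieva's question on a typical Linux box; whether cores land in the daemon's working directory or go to a crash handler depends on kernel.core_pattern (the paths below are common defaults, not from the chat):

```shell
# If this starts with '|' the core is piped to a handler (abrt,
# systemd-coredump, apport, ...) instead of being written to disk
cat /proc/sys/kernel/core_pattern

# Otherwise the core is written relative to the daemon's working directory,
# so check where ceph-osd was started from
ls -l /proc/$(pidof ceph-osd)/cwd

# The limit must be unlimited for the daemon's own process,
# not just your interactive shell
ulimit -c unlimited
```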
[19:53] <cetex> joelm: thanks. reading. i've skimmed through that page earlier when trying to set ceph up in the beginning.
[19:54] <cetex> jwilkins: ok. i found a thread on the mailinglist about "newstore" i'd like to test to see what the future has to offer. any more info on these things?
[19:54] * brutuscat (~brutuscat@93.Red-88-1-121.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:56] <cetex> we have a plan to set up a larger cluster and we need something like ceph or hdfs to manage it, but it feels like we're getting a bit too low throughput currently.
[19:57] <cetex> 9 drives gives around 100-120MB/sec. write speeds rarely (though it happens) go below 50MB/s
[19:59] * LeaChim (~LeaChim@host86-151-147-249.range86-151.btcentralplus.com) has joined #ceph
[19:59] <cetex> so, one drive per server, 9 servers, no journal drive since we can't fit it into the machine; each server currently has roughly 2Gbit of connectivity (in LACP, so one tcp session can push 1Gbit, multiple sessions can make use of up to 2Gbit), 24-48GB of ram and 2x 6-core xeon cpus per machine.
[20:00] <cetex> it feels like it should be possible to optimize this further to get closer to 200MB/sec in total throughput over these 9 nodes, but I may be wrong about that.
[20:01] * BManojlovic (~steki@cable-89-216-192-26.dynamic.sbb.rs) has joined #ceph
[20:02] * joef (~Adium@2620:79:0:2420::11) has joined #ceph
[20:03] * joef (~Adium@2620:79:0:2420::11) has left #ceph
[20:03] * p66kumar (~p66kumar@74.119.205.248) Quit (Quit: p66kumar)
[20:04] <cetex> the drives are "enterprisier" cheap 3.5" 7.2krpm sata drives, pushes 98MB/sec with dd using oflag=direct
[20:04] <cetex> :>
[20:05] * lalatenduM (~lalatendu@122.171.115.251) has joined #ceph
[20:06] <cetex> any thoughts on this?
[20:07] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[20:07] * i_m (~ivan.miro@deibp9eh1--blueice4n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[20:10] <achieva> hmm... Where is a location of core file (generated from ceph-osd)?
[20:11] * yehudasa_ (~yehudasa@2607:f298:a:607:ec72:dc25:4408:b78e) Quit (Remote host closed the connection)
[20:11] * sbfox (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[20:11] <Kupo1> Hey All, I seem to be getting tons of fault messages ever since upgrading to .94 https://pastebin.mozilla.org/8829455
[20:11] <Kupo1> are there any known issues?
[20:13] * Steki (~steki@198.199.65.141) has joined #ceph
[20:14] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Read error: Connection reset by peer)
[20:15] * yehudasa (~yehudasa@2607:f298:a:607:bc7a:d8c8:8a1d:c4a2) has joined #ceph
[20:15] <debian112> what's the best way to empty/clear out a pool without deleting it?
[20:17] * BManojlovic (~steki@cable-89-216-192-26.dynamic.sbb.rs) Quit (Ping timeout: 480 seconds)
[20:17] <seapasulli> without deleting the pool? Delete all of the objects inside would probably be the best. Depends on what pool it is. RGW if you use rados to delete the .rgw.buckets objects it will probably not work right :)
[20:18] <seapasulli> why not delete the pool and recreate it debian112 ?
[20:19] * tk12 (~tk12@68.140.239.132) Quit (Quit: Leaving...)
[20:19] <debian112> that's what I said, checking for our dev guys
[20:19] <debian112> I can just wack it
[20:20] * segutier (~segutier@172.56.17.227) has joined #ceph
[20:20] * thomnico (~thomnico@2a01:e35:8b41:120:5101:80f1:b96c:b12d) Quit (Quit: Ex-Chat)
[20:22] * Spikey (~jakekosbe@98EAAA278.tor-irc.dnsbl.oftc.net) has joined #ceph
[20:24] <cetex> joelm: nothing about leveldb in there :)
[20:25] <debian112> for i in `rados -p pool_obj ls -`; do rados -p pool_obj rm $i; done
[20:25] <debian112> worked
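debian112's loop works, but the word-splitting in the backticks breaks on object names containing whitespace; a slightly safer variant over the same (hypothetical) pool:

```shell
# Stream object names line by line and remove each one; IFS= and -r keep
# names with spaces or backslashes intact
rados -p pool_obj ls | while IFS= read -r obj; do
    rados -p pool_obj rm "$obj"
done
```

Later releases also grew a `rados purge` subcommand that removes every object in a pool in one go.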
[20:27] * elder (~elder@50.250.13.174) Quit (Quit: Leaving)
[20:27] * elder (~elder@50.250.13.174) has joined #ceph
[20:28] <debian112> anyone running a MDS?
[20:30] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[20:30] * Steki (~steki@198.199.65.141) Quit (Remote host closed the connection)
[20:32] * sbfox (~Adium@72.2.49.50) has joined #ceph
[20:36] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:37] * Kioob (~Kioob@2a01:e34:ec0a:c0f0:7e7a:91ff:fe3c:6865) has joined #ceph
[20:41] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[20:41] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Quit: Ex-Chat)
[20:43] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Quit: Leaving.)
[20:44] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[20:48] * BManojlovic (~steki@cable-89-216-192-26.dynamic.sbb.rs) has joined #ceph
[20:48] * sbfox (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[20:51] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[20:52] * Spikey (~jakekosbe@98EAAA278.tor-irc.dnsbl.oftc.net) Quit ()
[20:56] * AG_Scott (~straterra@37.187.129.166) has joined #ceph
[21:01] <cetex> so. if i'm thinking about speed correctly
[21:02] <cetex> assumption: rados bench write does sequential writes
[21:02] <cetex> assumption: osd writes journal and later data
[21:02] <cetex> assumption: when writing lots of data continuously the journal gets filled and the osd starts writing data back to files on the fs
[21:03] <cetex> result: roughly / barely 50% of the disk's throughput is available for new writes when that happens.
[21:03] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[21:04] * sbfox (~Adium@72.2.49.50) has joined #ceph
[21:04] <cetex> i have 9 disks each doing 98MB/s, with some disk thrashing (assumptions again) we end up with somewhere around 75-80MB/s per disk.
[21:05] * sherryg (~Adium@12.161.168.178) Quit (Quit: Leaving.)
[21:05] <cetex> and therefore around 37-40MB/s in sustained writes from clients.
[21:06] * dupont-y (~dupont-y@2a01:e34:ec92:8070:f036:a06f:495f:db99) has joined #ceph
[21:07] <cetex> 9 disks then means 333MB/sec to 360MB/sec in total write throughput, 3 copies means i end up with at most 111 - 120MB/sec if writing data continuously.
[21:07] * elder (~elder@50.250.13.174) Quit (Read error: Connection reset by peer)
[21:07] <cetex> sounds correct?
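cetex's arithmetic roughly checks out under the stated assumptions; a tiny calculator makes the dependencies explicit (the 50% journal double-write penalty and the 75-80MB/s post-thrashing figures are the chat's assumptions, not measurements):

```python
def sustained_client_mb_s(disks: int, disk_mb_s: float, replicas: int) -> float:
    """Sustained client write bandwidth with journal and data on the same disk.

    Every byte is written twice (journal, then filestore), halving usable
    per-disk bandwidth; replication then multiplies each client byte by
    `replicas` across the cluster.
    """
    per_disk = disk_mb_s / 2          # journal double-write penalty
    cluster = disks * per_disk        # aggregate raw write bandwidth
    return cluster / replicas         # what clients actually see

# 9 disks at 75-80 MB/s effective, 3x replication -> 112.5-120 MB/s,
# in line with the 111-120MB/sec estimate above
```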
[21:07] * elder (~elder@2601:6:7f00:2000:74c1:4740:b92c:eed7) has joined #ceph
[21:12] * p66kumar (~p66kumar@74.119.205.248) has joined #ceph
[21:17] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Quit: Ex-Chat)
[21:17] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[21:18] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[21:21] * branto (~branto@178-253-130-42.3pp.slovanet.sk) has left #ceph
[21:26] * rotbart (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) has joined #ceph
[21:26] * AG_Scott (~straterra@425AAAI1H.tor-irc.dnsbl.oftc.net) Quit ()
[21:27] * segutier_ (~segutier@172.56.17.234) has joined #ceph
[21:28] * jschmid (~jxs@ip9234e338.dynamic.kabel-deutschland.de) has joined #ceph
[21:30] * jschmid (~jxs@ip9234e338.dynamic.kabel-deutschland.de) Quit ()
[21:33] * segutier (~segutier@172.56.17.227) Quit (Ping timeout: 480 seconds)
[21:33] * segutier_ is now known as segutier
[21:33] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[21:34] <achieva> hi
[21:35] <achieva> why is ceph-osd aborted while running 4K random write I/O?
[21:35] <achieva> My ceph setup is (1 OSD + 1 MON == single node).
[21:36] <achieva> Even, I cannot find a generated core file (if i got it, i could use for debugging....)
[21:36] <achieva> any idea?
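The missing core file is often just core dumps being disabled. A generic Linux sketch (the core path and ceph-osd binary location are assumptions; the sysctl step needs root):

```shell
# Core dumps are commonly disabled by default; raise the limit first
ulimit -c unlimited
ulimit -c   # should now print "unlimited"

# As root, direct cores to a known path (%e = program name, %p = pid):
#   sysctl -w kernel.core_pattern=/var/tmp/core.%e.%p
#
# Then restart ceph-osd from this shell, reproduce the 4K random-write
# crash, and inspect the resulting core, e.g.:
#   gdb /usr/bin/ceph-osd /var/tmp/core.ceph-osd.<pid>
```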
[21:42] * ganders (~root@190.2.42.21) Quit (Quit: WeeChat 0.4.2)
[21:48] * lalatenduM (~lalatendu@122.171.115.251) Quit (Quit: Leaving)
[21:49] * bgleb (~bgleb@2a02:6b8:0:c33::13) has joined #ceph
[21:49] * segutier (~segutier@172.56.17.234) Quit (Ping timeout: 480 seconds)
[21:54] * rendar (~I@host175-182-dynamic.37-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:57] * rendar (~I@host175-182-dynamic.37-79-r.retail.telecomitalia.it) has joined #ceph
[21:57] * elder (~elder@2601:6:7f00:2000:74c1:4740:b92c:eed7) Quit (Quit: Leaving)
[21:58] * elder (~elder@50.250.13.174) has joined #ceph
[22:02] * platypython (~platypyth@208.186.235.4) has joined #ceph
[22:04] * rotbart (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) Quit (Quit: Leaving)
[22:05] <platypython> Greetings! I have a use case question about Ceph. I have 400 terabytes of data stored in 3 billion files across 80 hosts each with 4 drives. How does Ceph perform when storing/recalling billions of tiny files?
[22:05] * tk12 (~tk12@68.140.239.132) has joined #ceph
[22:06] * getup (~textual@r1.sensson.net) has joined #ceph
[22:06] <platypython> We currently shard them onto 12TB raids on EXT4, which has worked, until you need to copy/move a large chunk. The typical linux posix tools get overwhelmed easily by millions of tiny files. I'm hoping that Ceph is built in such a way that replication and copying are performed in a different, more manageable way.
[22:07] <gregsfortytwo> yeah, that wouldn't be much of a problem since it's all using stuff designed for large numbers of objects
[22:07] <getup> When I'm setting up radosgw I get an error: "cannot update region map, master_region conflict". What would I be looking for in the config to fix this?
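On getup's error: "master_region conflict" generally means the regions and the region map disagree about which region is master. A hedged starting point for inspecting this, assuming hammer-era radosgw-admin subcommands and a region named "default" (both assumptions):

```shell
# See which regions exist and what the current map believes
radosgw-admin region list
radosgw-admin regionmap get

# Inspect the region's "is_master" flag
radosgw-admin region get --rgw-region=default

# After editing the region JSON so exactly one region has
# "is_master": "true", matching "master_region" in the map
# (radosgw-admin region set < region.json), rebuild the map:
radosgw-admin regionmap update
```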
[22:07] <gregsfortytwo> you'd want to test though; nobody supports CephFS in production environments yet
[22:08] <gregsfortytwo> and depending on workload I'm not entirely confident in its performance with that metadata:data ratio
[22:08] <platypython> Good news is we don't care about metadata at all
[22:09] <gregsfortytwo> well, you care about opening the files, presumably at a reasonable speed ;)
[22:09] <gregsfortytwo> I don't think it should be a problem, just one of the things that might derail it
[22:10] <platypython> Gotcha. Does Ceph have any known object limit?
[22:10] <platypython> Limit to number of objects I mean
[22:10] * ircolle (~ircolle@4.34.49.35) has joined #ceph
[22:10] <gregsfortytwo> nope
[22:11] <platypython> excellent. Thank you
[22:12] <debian112> speaking of CephFS (MDS)
[22:12] <debian112> is anyone running it?
[22:13] <debian112> I think I might have a need for it
[22:13] * shaunm (~shaunm@cpe-65-185-127-82.cinci.res.rr.com) has joined #ceph
[22:14] <debian112> I want to be able to run our docker cluster on ceph.
[22:14] <debian112> instead of local storage
[22:19] * hellertime (~Adium@a23-79-238-10.deploy.static.akamaitechnologies.com) Quit (Quit: Leaving.)
[22:31] * Mraedis (~Doodlepie@37.187.129.166) has joined #ceph
[22:33] * delattec (~cdelatte@2606:a000:6e63:4c00:3e15:c2ff:feb8:dff8) Quit (Quit: This computer has gone to sleep)
[22:34] * bitserker (~toni@77.231.153.71) has joined #ceph
[22:34] * jrankin (~jrankin@209.132.181.86) has joined #ceph
[22:41] * jrankin (~jrankin@209.132.181.86) Quit (Quit: Leaving)
[22:44] * Egyptian[Laptop] (~marafa@cpe-98-26-77-230.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[22:54] * mykola (~Mikolaj@91.225.201.100) Quit (Quit: away)
[22:55] * Egyptian[Laptop] (~marafa@cpe-98-26-77-230.nc.res.rr.com) has joined #ceph
[22:57] <bstillwell> debian112: I'm using cephfs at home. The performance is good enough for my needs (cold storage), but I wouldn't call it quick.
[22:57] <bstillwell> Especially when a rebuild is going on.
[22:58] <bstillwell> On another note, does anyone know how to list watchers on an rbd snapshot?
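On the watchers question: watches attach to the image's header object (not to individual snapshots), and rados can list them. A sketch, assuming an image named "myimage" in pool "rbd" (placeholder names):

```shell
# Find the image's object naming details
rbd info rbd/myimage          # shows format and block_name_prefix

# Format-1 images use "<name>.rbd" as the header object:
rados -p rbd listwatchers myimage.rbd

# Format-2 images use "rbd_header.<id>", where <id> is the
# block_name_prefix with the "rbd_data." part stripped:
#   rados -p rbd listwatchers rbd_header.<id>
```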
[22:58] <debian112> bstillwell: thanks. I think I might build two HA NFS head units in front, and connect them to ceph block storage
[22:58] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: Goodbye)
[22:58] <debian112> do the whole NFS active / passive setup
[22:59] <debian112> till CephFS is prod ready
[22:59] <bstillwell> debian112: Is this for block storage?
[23:00] <debian112> file storage is what I would like, for a docker cluster
[23:00] * bitserker (~toni@77.231.153.71) Quit (Ping timeout: 480 seconds)
[23:00] <debian112> they all have to share the same storage
[23:00] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:01] * Mraedis (~Doodlepie@25bb81a6.test.dnsbl.oftc.net) Quit ()
[23:01] <bstillwell> I was thinking that OCFS2 on top of RBD would work well for that, but I'm not sure how ceph handles multiple servers mounting the same RBD.
[23:01] <debian112> yeah, that was my next thought
[23:02] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[23:02] * bandrus (~brian@128.sub-70-211-79.myvzw.com) Quit (Quit: Leaving.)
[23:02] <debian112> but didn't want to make it too complicated. RBD and GFS2 or OCFS2 is a great solution for master/master database servers
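For the shared-RBD idea above: RBD itself will happily map one image on several hosts; correctness then depends entirely on a cluster-aware filesystem like OCFS2 on top (a non-cluster filesystem mounted this way would corrupt itself). A rough sketch with placeholder names:

```shell
# Create a shared image once (100 GB; name and size are placeholders)
rbd create shared-vol --size 102400

# On EACH node that should see the filesystem:
rbd map shared-vol                 # appears as e.g. /dev/rbd0

# Make the cluster filesystem once; OCFS2 needs its own cluster stack
# (configured in /etc/ocfs2/cluster.conf) before mkfs/mount:
mkfs.ocfs2 -N 4 /dev/rbd0          # allow up to 4 concurrent nodes

# Mount on every node:
mount -t ocfs2 /dev/rbd0 /mnt/shared
```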
[23:03] * tupper_ (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) Quit (Ping timeout: 480 seconds)
[23:03] * bandrus (~brian@128.sub-70-211-79.myvzw.com) has joined #ceph
[23:05] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[23:11] * sherryg (~Adium@12.161.168.178) has joined #ceph
[23:13] * guppy (~quassel@2605:6f00:877::691a:35e8) Quit (Quit: guppy)
[23:13] * guppy (~quassel@guppy.xxx) has joined #ceph
[23:14] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[23:28] * shaunm (~shaunm@cpe-65-185-127-82.cinci.res.rr.com) Quit (Ping timeout: 480 seconds)
[23:29] * BManojlovic (~steki@cable-89-216-192-26.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:33] * sherryg (~Adium@12.161.168.178) Quit (Quit: Leaving.)
[23:38] * elder (~elder@50.250.13.174) Quit (Ping timeout: 480 seconds)
[23:48] * ircolle (~ircolle@4.34.49.35) Quit (Ping timeout: 480 seconds)
[23:49] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) Quit (Quit: Leaving.)
[23:50] * jclm (~jclm@ip24-253-45-236.lv.lv.cox.net) has joined #ceph
[23:52] * daniel2_ (~daniel2_@209.163.140.194) Quit (Quit: Textual IRC Client: www.textualapp.com)
[23:54] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[23:57] * p66kumar (~p66kumar@74.119.205.248) Quit (Quit: p66kumar)
[23:58] * sbfox (~Adium@72.2.49.50) Quit (Quit: Leaving.)
[23:59] * bene (~ben@c-75-68-96-186.hsd1.nh.comcast.net) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.