#ceph IRC Log

IRC Log for 2015-08-11

Timestamps are in GMT/BST.

[0:00] * julen (~julen@2001:638:70e:11:1cc9:5acd:fea4:a66c) has joined #ceph
[0:02] <TheSov> the stupid emc people have taken over the storage reddit. every time i mention ceph i get downvoted, and "someone" (nearly always the same guy) mentions scaleio. he's already admitted to working at emc
[0:03] <TheSov> not today though, i posted several things and 1 topic
[0:06] * segutier_ (~segutier@sfo-vpn1.shawnlower.net) has joined #ceph
[0:10] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[0:10] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[0:10] * segutier (~segutier@sfo-vpn1.shawnlower.net) Quit (Ping timeout: 480 seconds)
[0:10] * segutier_ is now known as segutier
[0:14] * Mousey (~demonspor@5NZAAF8PW.tor-irc.dnsbl.oftc.net) Quit ()
[0:14] * utugi______ (~CoZmicShR@tor2e1.privacyfoundation.ch) has joined #ceph
[0:15] <monsted> i'm not surprised. EMC is the scummiest of scum.
[0:21] * ircolle (~Adium@2601:285:201:2bf9:502c:76f8:4291:9a92) Quit (Quit: Leaving.)
[0:21] <monsted> at our place they kept going up the corporate ladder until someone listened to them, at each level telling them how incompetent we were for not accepting their offer.
[0:21] <TheSov> wow
[0:22] <monsted> their offer included a promise to save us seven people in the storage group. minor problem: there were three of us and we mostly did backups.
[0:23] <TheSov> LOL
[0:23] <TheSov> guys the storage now takes care of itself!
[0:24] <TheSov> we dont even have to provision storage anymore!
[0:24] <TheSov> one time i seriously had a finance guy ask me why we dont just get a rackmount shelf and connect external usb drives to the servers for extra storage
[0:24] <TheSov> not a joke
[0:24] <TheSov> actually happened
[0:26] <monsted> worst part was, shortly after we finally got rid of EMC, i was outsourced to a different branch of the company who had just signed a huge deal with EMC :(
[0:26] <sean> to which you replied :: sure thing please sign here?
[0:27] <sean> We have had nothing but problems with black-box solutions except for one product:: cleversafe. I am trying to get ceph to outperform cleversafe and it is failing :-(
[0:28] <sean> I think solely due to my inadequacy and laziness
[0:29] * skorgu (skorgu@pylon.skorgu.net) has joined #ceph
[0:30] <hemebond> Am I right in assuming that no one monitors email and approves them?
[0:31] <hemebond> Meaning I have to create an account if I want to send to the mailing list?
[0:33] <TheSov> my friend works for docusign
[0:33] <TheSov> im sure you have heard of them?
[0:33] <TheSov> they dropped emc like a ton of bricks and opted for an in house solution
[0:33] <TheSov> nas's.. tons of them replicated
[0:33] <TheSov> yes folks replicated nas's are better in docusigns eyes than an emc vnx
[0:34] * dolgner (~textual@50-192-42-249-static.hfc.comcastbusiness.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[0:38] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) Quit (Quit: bye!)
[0:38] <monsted> being stabbed in the eye is for the most part better than an emc vnx
[0:39] <TheSov> LOL
[0:40] <TheSov> you know this may sound bad but comparing my storage experiences with equallogic, compellent, and hp storevirtual, i have to say storevirtual gave me the most peace of mind
[0:40] <TheSov> but i hate java so badly, that im willing to do just about anything to get away from it
[0:40] <TheSov> thats what led me to ceph
[0:40] <TheSov> NO JAVA
[0:40] <monsted> HDS. HDS all the way.
[0:40] <TheSov> and then i rejoiced!
[0:41] <TheSov> never messed with those, are they java too?
[0:41] <monsted> it was a joyous day when we rolled out the 12 IBM DS4k and DS8k arrays after moving back to HDS
[0:41] <snakamoto> HDS is great, but sooo expensive
[0:41] <snakamoto> does not scale very well either
[0:41] <monsted> snakamoto: same price as the other enterprise arrays
[0:41] * arbrandes (~arbrandes@179.97.155.77) Quit (Quit: Leaving)
[0:42] <TheSov> monsted, the store virtualarrays are 30k each
[0:42] <TheSov> 40TB and support shelf failure
[0:42] <TheSov> does your HDS do that?
[0:42] <monsted> not at 30k :)
[0:42] <TheSov> well there you go
[0:43] <monsted> but a virtualarray is not an enterprise storage array
[0:43] <TheSov> how do you figure?
[0:44] <monsted> the enterprise arrays are the crazy redundant systems with support for stuff like mainframes
[0:44] <TheSov> you install the VSA appliance on a vmware host, by itself, just for storage
[0:44] <TheSov> they bond together in a scaleable platform
[0:44] <TheSov> up to 15 machines per cluster
[0:44] <monsted> it's not a question of where it's used, it's just a tier of product
[0:44] * utugi______ (~CoZmicShR@5NZAAF8QS.tor-irc.dnsbl.oftc.net) Quit ()
[0:44] * Azru (~Enikma@tor-exit.gansta93.com) has joined #ceph
[0:44] <TheSov> but the tiering is totally up to you
[0:44] <monsted> not really
[0:45] <TheSov> ???
[0:45] <TheSov> now i am confuse
[0:45] <TheSov> its 10 gig iscsi that supports active/active multipath
[0:45] <TheSov> and again, shelf failure!
[0:45] <TheSov> other than ceph and scaleio and such who out there supports shelf failure
[0:46] <monsted> the classic enterprise array market is the HDS VSP, IBM DS8000 and EMC DMX class of gear
[0:46] <TheSov> well, i dont buy into that logic
[0:46] <monsted> then there's the midrange, which is pretty much everything else
[0:46] <snakamoto> NetApp supports shelf failure - sort of
[0:46] <snakamoto> (mirrored)
[0:46] <TheSov> i consider any gear with enterprise features, speed and support to be enterprise
[0:46] <TheSov> netapp? good luck with that
[0:47] <snakamoto> I consider enterprise storage to be any system where the sales guys lie to you constantly about being able to perform live firmware upgrades.
[0:47] <TheSov> dont get me wrong i think netapp is great, but they have bugs that have made grown men weep like babes in the woods
[0:47] <monsted> TheSov: i would be surprised if you could buy FICON ports for that stuff :)
[0:48] <TheSov> monsted, werlllll you got me there
[0:48] <TheSov> im an iscsi man myself
[0:48] <monsted> snakamoto: we've run HDS arrays for twenty years and never had a single issue with a live upgrade
[0:48] <TheSov> though i deal far too much with FC these days
[0:49] <snakamoto> monsted: you can't do major releases live (we've been lied to a few times about that)
[0:49] <monsted> the HP EVAs on the other hand died constantly
[0:49] <monsted> snakamoto: major releases usually only show up when you're replacing the array, at least for HDS
[0:50] <monsted> (this is both good and bad - it'd be nice to get the shiny new software on older systems, but meh.)
[0:50] <snakamoto> It was always EOSL that bit us. Back in the BlueArc days, it was usually patches they didn't want to backport.
[0:51] <monsted> no idea about the bluearc. hardly an enterprise array :)
[0:51] <snakamoto> ?
[0:51] <snakamoto> BlueArc => HDS
[0:51] <snakamoto> HDS is BlueArc
[0:51] <monsted> i know they bought them.
[0:52] <monsted> but a NAS controller head isn't quite an "enterprise array"
[0:52] <snakamoto> oh, you're talking about like WMS, AMS, and their newer counterparts
[0:52] <monsted> (not that live upgrades on the HDS midrage gear was ever a problem, either)
[0:53] <monsted> snakamoto: those are midrange. enterprise is the VSP.
[0:53] <snakamoto> ahh yeah, we did not have any issues with those. =D
[0:54] <monsted> oh, they're calling everything VSP now. great.
[0:55] <TheSov> damn if only we had native ceph rbd for vmware, do you know how many san vendors that would kill
[0:55] * jclm (~jclm@203.191.203.202) has joined #ceph
[0:55] <monsted> ah, no, there's still the HUS in the midrange market.
[0:55] <monsted> TheSov: all the crappy ones, hopefully.
[0:56] <monsted> and god damn, there's a lot of crappy storage vendors.
[0:56] <TheSov> yes there are
[0:56] <monsted> (some might argue: all of them)
[0:56] <TheSov> when i tell people about ceph and how there is no raid, they dont get it
[0:56] <TheSov> i tell them its built for failure
[0:57] <TheSov> it doesnt compute
[0:57] <monsted> the ones that aren't crap quality are generally designed and run by dinosaurs :)
[0:57] <TheSov> its like they have a mental block installed by emc
[0:57] <snakamoto> TheSov: I like constantly having to call objects "files"
[0:57] <TheSov> not exactly the same but the analogy works for lay people
[0:59] <TheSov> an object can be spanned whereas files cannot, objects contain data beyond what files contain, etc
[1:00] <TheSov> but i can see why you would do that, explaining it is a nightmare
[1:00] <snakamoto> especially when the objects are stored as files = (
[1:03] <TheSov> to a user it makes no difference, all he wants to know is, can u put files in and take them out later? yes, yes you can
[1:04] <TheSov> the conversation with my boss went a little like that, "what happens when a disk fails", "15 minutes later its ejected from the cluster", "what does that mean", "it means the cluster gets smaller but everything is still redundant, we go back and change the disk at our leisure", "sold"
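
(The "ejected from the cluster" behavior TheSov describes is governed by how long the monitors wait before marking a down OSD "out". A minimal sketch, assuming the stock option name; the 900 s value here just mirrors the "15 minutes" quoted above and is not the default:)

    # seconds the monitors wait before marking a down OSD "out" (runtime change;
    # put mon_osd_down_out_interval in ceph.conf to make it persistent)
    ceph tell mon.* injectargs '--mon_osd_down_out_interval 900'
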
[1:05] <TheSov> now my boss's boss is a different story, i have to prove it out to him
[1:05] <TheSov> so i am optimizing my test cluster now
[1:06] <TheSov> anyway i gotta go see a horse about a man, so ill talk to you all tomorrow
[1:06] <snakamoto> have a good one
[1:06] * TheSov (~TheSov@204.13.200.248) Quit (Read error: Connection reset by peer)
[1:07] * rendar (~I@host118-186-dynamic.21-87-r.retail.telecomitalia.it) Quit ()
[1:10] * CheKoLyN (~saguilar@bender.parc.xerox.com) Quit (Remote host closed the connection)
[1:11] * kmARC (~kmARC@80-219-254-3.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:12] * Sysadmin88 (~IceChat77@2.125.96.238) has joined #ceph
[1:14] * kmARC_ (~kmARC@80-219-254-3.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:14] * Azru (~Enikma@7R2AADOCB.tor-irc.dnsbl.oftc.net) Quit ()
[1:14] * Curt` (~Epi@5.9.158.75) has joined #ceph
[1:15] <monsted> i want to see his cluster physically eject a bad disk
[1:16] <monsted> "Blargh!" and the disk drops on the floor in front of the rack
[1:17] * sebastian_ (~sebastian@194-118-8-11.adsl.highway.telekom.at) has joined #ceph
[1:18] * gfidente (~gfidente@0001ef4b.user.oftc.net) has joined #ceph
[1:18] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[1:18] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[1:19] * gfidente (~gfidente@0001ef4b.user.oftc.net) Quit ()
[1:20] * ivotron (~ivotron@eduroam-169-233-197-33.ucsc.edu) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[1:21] * sebastian_ (~sebastian@194-118-8-11.adsl.highway.telekom.at) Quit (Quit: Leaving)
[1:22] * reed (~reed@2607:f298:a:607:29b1:9870:a5dd:125d) Quit (Ping timeout: 480 seconds)
[1:24] * reed (~reed@2607:f298:a:607:29b1:9870:a5dd:125d) has joined #ceph
[1:27] * fsimonce` (~simon@host93-234-dynamic.252-95-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[1:37] * segutier (~segutier@sfo-vpn1.shawnlower.net) Quit (Ping timeout: 480 seconds)
[1:39] * neurodrone_ (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) has joined #ceph
[1:44] * Curt` (~Epi@9S0AADDUG.tor-irc.dnsbl.oftc.net) Quit ()
[1:44] * dontron (~nartholli@chomsky.torservers.net) has joined #ceph
[1:45] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) Quit (Ping timeout: 480 seconds)
[1:56] * oms101 (~oms101@p20030057EA083E00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:56] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[2:01] * oms101 (~oms101@p20030057EA08C400EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[2:04] * kbader (~Adium@64.169.30.57) has joined #ceph
[2:14] * dontron (~nartholli@5NZAAF8T3.tor-irc.dnsbl.oftc.net) Quit ()
[2:14] * DoDzy (~w0lfeh@spftor1e1.privacyfoundation.ch) has joined #ceph
[2:15] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[2:22] * xarses_ (~xarses@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:22] * scuttlemonkey is now known as scuttle|afk
[2:28] * thansen (~thansen@17.253.sfcn.org) Quit (Quit: Ex-Chat)
[2:36] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[2:38] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[2:38] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[2:38] * reed (~reed@2607:f298:a:607:29b1:9870:a5dd:125d) Quit (Quit: Ex-Chat)
[2:39] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[2:44] * DoDzy (~w0lfeh@9S0AADDWQ.tor-irc.dnsbl.oftc.net) Quit ()
[2:44] * ylmson (~vegas3@tor.piratenpartei-nrw.de) has joined #ceph
[2:46] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[2:53] * kbader (~Adium@64.169.30.57) Quit (Quit: Leaving.)
[3:03] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:06] * joshd (~jdurgin@ip-64-134-128-17.public.wayport.net) has joined #ceph
[3:09] * zaitcev (~zaitcev@ip-64-134-128-17.public.wayport.net) has joined #ceph
[3:10] * zhaochao (~zhaochao@125.39.8.233) has joined #ceph
[3:12] * mhack (~mhack@68-184-37-225.dhcp.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[3:13] * nhm (~nhm@ip-64-134-128-17.public.wayport.net) has joined #ceph
[3:13] * ChanServ sets mode +o nhm
[3:14] * ylmson (~vegas3@9S0AADDXJ.tor-irc.dnsbl.oftc.net) Quit ()
[3:14] * thundercloud (~rapedex@198.50.128.236) has joined #ceph
[3:22] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[3:24] * yguang11 (~yguang11@2001:4998:effd:600:b40a:cc7c:d04b:ec54) Quit (Remote host closed the connection)
[3:36] * georgem (~Adium@65-110-211-254.cpe.pppoe.ca) has joined #ceph
[3:38] * shohn1 (~shohn@dslb-094-223-165-069.094.223.pools.vodafone-ip.de) has joined #ceph
[3:42] * shohn (~shohn@dslb-188-102-025-247.188.102.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[3:44] * thundercloud (~rapedex@7R2AADOHR.tor-irc.dnsbl.oftc.net) Quit ()
[3:44] * jacoo (~Cue@marylou.nos-oignons.net) has joined #ceph
[3:45] * sankarshan (~sankarsha@121.244.87.117) has joined #ceph
[3:54] * kbader (~Adium@pool-100-9-210-71.lsanca.fios.verizon.net) has joined #ceph
[3:54] * kbader (~Adium@pool-100-9-210-71.lsanca.fios.verizon.net) Quit ()
[4:01] * bla_ (~bla@2001:67c:670:100:fa0f:41ff:fe58:4033) Quit (Ping timeout: 480 seconds)
[4:03] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[4:05] * kefu (~kefu@114.92.110.67) has joined #ceph
[4:05] * kefu is now known as kefu|afk
[4:05] * zaitcev (~zaitcev@ip-64-134-128-17.public.wayport.net) Quit (Ping timeout: 480 seconds)
[4:07] * kefu|afk is now known as kefu
[4:07] * kefu is now known as kefu|afk
[4:07] * kefu|afk is now known as kefu
[4:08] * zaitcev (~zaitcev@ip-64-134-128-17.public.wayport.net) has joined #ceph
[4:09] * bababurko (~bababurko@70-90-168-211-SFBACalifornia.hfc.comcastbusiness.net) has joined #ceph
[4:12] <bababurko> Does anyone have experience recovering from crashed MDS servers running hammer?
[4:13] * codice_ (~toodles@97-94-175-73.static.mtpk.ca.charter.com) has joined #ceph
[4:14] * jacoo (~Cue@5NZAAF8XA.tor-irc.dnsbl.oftc.net) Quit ()
[4:14] * Kakeru (~JamesHarr@tor-exit.katzen.me) has joined #ceph
[4:15] * codice (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (Ping timeout: 480 seconds)
[4:20] * zaitcev (~zaitcev@ip-64-134-128-17.public.wayport.net) Quit (Quit: Bye)
[4:29] * moore (~moore@71-211-70-63.phnx.qwest.net) has joined #ceph
[4:29] * moore (~moore@71-211-70-63.phnx.qwest.net) Quit (Remote host closed the connection)
[4:30] * moore (~moore@64.202.160.233) has joined #ceph
[4:31] * moore (~moore@64.202.160.233) Quit (Remote host closed the connection)
[4:31] * moore (~moore@64.202.160.233) has joined #ceph
[4:35] * cpaquin (~cpaquin@c-24-99-55-9.hsd1.ga.comcast.net) has joined #ceph
[4:35] * codice_ (~toodles@97-94-175-73.static.mtpk.ca.charter.com) Quit (Read error: Connection reset by peer)
[4:35] * cpaquin (~cpaquin@c-24-99-55-9.hsd1.ga.comcast.net) Quit ()
[4:37] * cpaquin (~cpaquin@c-24-99-55-9.hsd1.ga.comcast.net) has joined #ceph
[4:39] <cpaquin> Ping Ceph: Is the number of PGs created when creating a pool equal to the number of PGs specified in the create command multiplied by the number of replicas?
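
(cpaquin's question goes unanswered here; for the record, a pool gets exactly the pg_num given at creation, and replication multiplies the PG *copies* placed on OSDs, not the PG count. A minimal sketch; the pool name is hypothetical:)

    # create a pool with 128 PGs (pg_num and pgp_num)
    ceph osd pool create testpool 128 128
    ceph osd pool get testpool size   # replica count, e.g. 3
    # the pool still has 128 PGs; with size=3 there are 128*3 = 384 PG copies
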
[4:42] * bkopilov (~bkopilov@bzq-109-66-56-13.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[4:44] * Kakeru (~JamesHarr@5NZAAF8YC.tor-irc.dnsbl.oftc.net) Quit ()
[4:44] * biGGer (~cryptk@spftor1e1.privacyfoundation.ch) has joined #ceph
[4:48] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) has joined #ceph
[4:52] * bababurko (~bababurko@70-90-168-211-SFBACalifornia.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[4:55] * cpaquin (~cpaquin@c-24-99-55-9.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[4:58] * rotbeard (~redbeard@cm-171-100-223-199.revip10.asianet.co.th) has joined #ceph
[5:00] * georgem (~Adium@65-110-211-254.cpe.pppoe.ca) Quit (Quit: Leaving.)
[5:10] * Sysadmin88 (~IceChat77@2.125.96.238) Quit (Quit: The early bird may get the worm, but the second mouse gets the cheese)
[5:12] * moore (~moore@64.202.160.233) Quit (Remote host closed the connection)
[5:12] * lucas1 (~Thunderbi@218.76.52.64) Quit (Quit: lucas1)
[5:14] * biGGer (~cryptk@7R2AADOLG.tor-irc.dnsbl.oftc.net) Quit ()
[5:14] * KristopherBel (~Mraedis@politkovskaja.torservers.net) has joined #ceph
[5:18] * kefu is now known as kefu|afk
[5:19] * kefu|afk (~kefu@114.92.110.67) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[5:20] * kefu (~kefu@114.92.110.67) has joined #ceph
[5:20] * kefu is now known as kefu|afk
[5:21] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[5:26] * kefu|afk (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[5:27] * kefu (~kefu@114.92.110.67) has joined #ceph
[5:27] * yanzheng (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[5:28] * yguang11 (~yguang11@nat-dip15.fw.corp.yahoo.com) has joined #ceph
[5:28] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[5:29] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[5:32] * kefu (~kefu@114.92.110.67) has joined #ceph
[5:36] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[5:37] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[5:38] * kefu (~kefu@114.92.110.67) has joined #ceph
[5:43] * neurodrone_ (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[5:44] * yanzheng (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[5:44] * KristopherBel (~Mraedis@9S0AADD2L.tor-irc.dnsbl.oftc.net) Quit ()
[5:44] * CoMa (~rf`@192.42.116.16) has joined #ceph
[5:45] * SongboWang (~oftc-webi@107.170.240.110) has joined #ceph
[5:46] * joshd (~jdurgin@ip-64-134-128-17.public.wayport.net) Quit (Ping timeout: 480 seconds)
[5:46] * overclk (~overclk@121.244.87.117) has joined #ceph
[5:50] * cpaquin (~cpaquin@c-24-99-55-9.hsd1.ga.comcast.net) has joined #ceph
[5:50] * logan (~a@216.144.251.246) Quit (Ping timeout: 480 seconds)
[5:53] * Vacuum__ (~Vacuum@i59F79974.versanet.de) has joined #ceph
[5:55] * logan (~a@216.144.251.246) has joined #ceph
[5:55] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[5:55] * kbader (~Adium@pool-100-9-210-71.lsanca.fios.verizon.net) has joined #ceph
[5:56] * kbader (~Adium@pool-100-9-210-71.lsanca.fios.verizon.net) Quit ()
[5:58] * yguang11 (~yguang11@nat-dip15.fw.corp.yahoo.com) Quit (Remote host closed the connection)
[5:59] * yguang11 (~yguang11@2001:4998:effd:7804::101a) has joined #ceph
[5:59] * kefu is now known as kefu|afk
[5:59] * kefu|afk (~kefu@114.92.110.67) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:00] * Vacuum_ (~Vacuum@i59F7915E.versanet.de) Quit (Ping timeout: 480 seconds)
[6:00] * kefu (~kefu@114.92.110.67) has joined #ceph
[6:01] * kefu is now known as kefu|afk
[6:04] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[6:08] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[6:11] * kefu|afk (~kefu@114.92.110.67) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:12] * kefu (~kefu@114.92.110.67) has joined #ceph
[6:12] * kefu is now known as kefu|afk
[6:13] * SongboWang (~oftc-webi@107.170.240.110) Quit (Quit: Page closed)
[6:13] * songbowang (~oftc-webi@107.170.240.110) has joined #ceph
[6:14] * CoMa (~rf`@7R2AADONT.tor-irc.dnsbl.oftc.net) Quit ()
[6:14] * Cue (~Frostshif@ks.whyrlpool.com) has joined #ceph
[6:21] * yguang11 (~yguang11@2001:4998:effd:7804::101a) Quit (Remote host closed the connection)
[6:22] * yguang11 (~yguang11@2001:4998:effd:7804::101a) has joined #ceph
[6:22] * kefu|afk (~kefu@114.92.110.67) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:23] * kefu (~kefu@114.92.110.67) has joined #ceph
[6:23] * kefu is now known as kefu|afk
[6:27] * joshd (~jdurgin@ip-64-134-128-17.public.wayport.net) has joined #ceph
[6:29] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[6:29] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[6:31] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) Quit (Quit: WeeChat 1.2)
[6:33] * kefu|afk is now known as kefu
[6:40] * flisky (~Thunderbi@106.39.60.34) Quit (Remote host closed the connection)
[6:40] * rotbeard (~redbeard@cm-171-100-223-199.revip10.asianet.co.th) Quit (Quit: Leaving)
[6:42] * ivotron (~ivotron@c-67-169-145-20.hsd1.ca.comcast.net) has joined #ceph
[6:43] * flisky (~Thunderbi@106.39.60.34) has joined #ceph
[6:44] * Cue (~Frostshif@5NZAAF81W.tor-irc.dnsbl.oftc.net) Quit ()
[6:44] * Revo84 (~rf`@199.68.196.124) has joined #ceph
[6:46] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) Quit (Ping timeout: 480 seconds)
[6:47] * lucas1 (~Thunderbi@218.76.52.64) Quit (Ping timeout: 480 seconds)
[6:58] * songbowang (~oftc-webi@107.170.240.110) Quit (Ping timeout: 480 seconds)
[6:58] * snakamoto (~Adium@192.16.26.2) Quit (Quit: Leaving.)
[7:01] * segutier (~segutier@12.51.62.253) has joined #ceph
[7:04] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[7:06] * shyu (~Shanzhi@119.254.120.66) Quit (Ping timeout: 480 seconds)
[7:07] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[7:11] * yguang11 (~yguang11@2001:4998:effd:7804::101a) Quit (Remote host closed the connection)
[7:12] * yguang11 (~yguang11@2001:4998:effd:7804::101a) has joined #ceph
[7:13] * kefu (~kefu@114.92.110.67) has joined #ceph
[7:14] * yguang11 (~yguang11@2001:4998:effd:7804::101a) Quit ()
[7:14] * Revo84 (~rf`@9S0AADD5C.tor-irc.dnsbl.oftc.net) Quit ()
[7:14] * Chrissi_ (~Pettis@5NZAAF83M.tor-irc.dnsbl.oftc.net) has joined #ceph
[7:17] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[7:18] * kefu (~kefu@114.92.110.67) has joined #ceph
[7:21] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:21] * lcurtis (~lcurtis@47.19.105.250) Quit (Remote host closed the connection)
[7:21] * codice (~toodles@75-128-34-237.static.mtpk.ca.charter.com) has joined #ceph
[7:34] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:41] * joshd (~jdurgin@ip-64-134-128-17.public.wayport.net) Quit (Quit: Leaving.)
[7:43] * sleinen1 (~Adium@2001:620:0:69::100) Quit (Read error: Connection reset by peer)
[7:44] * Chrissi_ (~Pettis@5NZAAF83M.tor-irc.dnsbl.oftc.net) Quit ()
[7:48] * segutier (~segutier@12.51.62.253) Quit (Quit: segutier)
[7:56] * b0e (~aledermue@213.95.25.82) has joined #ceph
[8:11] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[8:14] * Kaervan (~xanax`@tor2e1.privacyfoundation.ch) has joined #ceph
[8:15] * bearkitten (~bearkitte@cpe-76-172-86-115.socal.res.rr.com) has joined #ceph
[8:17] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:20] * neurodrone (~neurodron@162.243.191.67) Quit (Ping timeout: 480 seconds)
[8:20] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[8:26] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[8:26] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[8:29] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[8:35] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) has joined #ceph
[8:36] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[8:37] * nardial (~ls@dslb-178-006-188-098.178.006.pools.vodafone-ip.de) has joined #ceph
[8:42] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[8:42] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:44] * Kaervan (~xanax`@5NZAAF85B.tor-irc.dnsbl.oftc.net) Quit ()
[8:44] <Be-El> hi
[8:45] * shylesh (~shylesh@121.244.87.118) has joined #ceph
[8:49] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[8:51] * sleinen1 (~Adium@2001:620:0:82::103) has joined #ceph
[8:53] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[8:54] * branto (~branto@ip-213-220-232-132.net.upcbroadband.cz) has joined #ceph
[8:57] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[9:03] <th0m> hello
[9:04] * jclm (~jclm@203.191.203.202) Quit (Quit: Leaving.)
[9:04] * derjohn_mobi (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:07] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:08] * snerd_ is now known as snerd
[9:10] * arcimboldo (~antonio@84-75-174-248.dclient.hispeed.ch) has joined #ceph
[9:12] * dgurtner (~dgurtner@178.197.231.115) has joined #ceph
[9:13] * analbeard (~shw@support.memset.com) has joined #ceph
[9:14] * hifi (~mason@89.105.194.70) has joined #ceph
[9:17] * kefu (~kefu@114.92.110.67) has joined #ceph
[9:19] * borourke (~borourke@papanak.ph.ed.ac.uk) has joined #ceph
[9:21] * borourke (~borourke@papanak.ph.ed.ac.uk) has left #ceph
[9:22] <arcimboldo> hi all, I have an issue with kvm+ceph. I have two rbd volumes attached to the VM, and on the compute nodes the number of open connections to the ceph OSDs reaches > 2 million, causing the IO on the vm to freeze.
[9:22] <arcimboldo> the error in the qemu logfile is "Error: socket: Too many open files"
[9:24] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[9:26] * derjohn_mobi (~aj@fw.gkh-setu.de) has joined #ceph
[9:28] <rkeene> It sounds like you have a file descriptor leak.
[9:29] <arcimboldo> a bug in kvm? or librbd?
[9:30] * rendar (~I@95.235.176.14) has joined #ceph
[9:37] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[9:39] * sleinen1 (~Adium@2001:620:0:82::103) Quit (Ping timeout: 480 seconds)
[9:42] <rkeene> Hard to know without doing any actual work :-D
[9:43] <arcimboldo> rkeene, I really need to fix this, so any guidance on how to debug this issue is very welcome...
[9:44] <arcimboldo> I've seen there was an issue with too many *.asock files in /var/run/ceph, but I don't know if this is the case, since I don't even have the directory
[9:44] * hifi (~mason@7R2AADOXT.tor-irc.dnsbl.oftc.net) Quit ()
[9:44] * Frymaster (~oracular@aurora.enn.lu) has joined #ceph
[9:46] * karnan (~karnan@121.244.87.117) has joined #ceph
[9:50] * fsimonce (~simon@host93-234-dynamic.252-95-r.retail.telecomitalia.it) has joined #ceph
[9:51] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) has joined #ceph
[9:54] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[9:55] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[9:56] <arcimboldo> so it seems kvm is creating ~2k threads *after* connecting to the OSDs, so lsof sees around 30-60k connections to each osd
[9:57] <arcimboldo> and I have 36 osd servers (864 osds in total)
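
(A sketch of how one might confirm a descriptor leak like this; the PID is hypothetical, and the libvirt max_files knob is an assumption about the deployment:)

    # count fds held by the qemu process and compare against its limit
    ls /proc/12345/fd | wc -l
    grep 'open files' /proc/12345/limits
    # if libvirt manages the guests, the per-VM fd cap can be raised in
    # /etc/libvirt/qemu.conf (assumption: a max_files setting is available there)
    #   max_files = 32768
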
[10:00] <rkeene> It'll probably be a few hours before anyone is awake
[10:09] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[10:11] * kmARC (~kmARC@2001:620:20:16:a086:13f2:9e:1dfa) has joined #ceph
[10:13] * Nacer (~Nacer@LCaen-656-1-72-185.w80-13.abo.wanadoo.fr) has joined #ceph
[10:14] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:14] * Frymaster (~oracular@7R2AADOY3.tor-irc.dnsbl.oftc.net) Quit ()
[10:14] * SurfMaths (~AG_Clinto@176.10.99.209) has joined #ceph
[10:15] * kefu_ (~kefu@183.193.158.17) has joined #ceph
[10:15] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:16] * yanzheng (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[10:17] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[10:19] * kefu (~kefu@114.92.110.67) Quit (Ping timeout: 480 seconds)
[10:20] * davidz (~davidz@2605:e000:1313:8003:7544:2e13:1fef:3bc7) Quit (Read error: Connection reset by peer)
[10:20] * davidz (~davidz@2605:e000:1313:8003:2cde:e8d:29fd:1f51) has joined #ceph
[10:35] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) has joined #ceph
[10:40] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[10:41] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) has joined #ceph
[10:44] * SurfMaths (~AG_Clinto@7R2AADO0S.tor-irc.dnsbl.oftc.net) Quit ()
[10:44] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[10:45] * yanzheng (~zhyan@125.71.106.169) Quit (Ping timeout: 480 seconds)
[10:45] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[10:45] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[10:48] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) has joined #ceph
[10:48] * fdmanana (~fdmanana@bl13-153-166.dsl.telepac.pt) has joined #ceph
[10:49] * notarima (~oracular@7R2AADO17.tor-irc.dnsbl.oftc.net) has joined #ceph
[10:53] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) has joined #ceph
[10:56] * karnan (~karnan@121.244.87.117) has joined #ceph
[11:02] * kefu_ (~kefu@183.193.158.17) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[11:02] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[11:04] * ToMiles (~ToMiles@nl8x.mullvad.net) Quit (Quit: leaving)
[11:05] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[11:08] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[11:08] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[11:08] * ira (~ira@121.244.87.124) has joined #ceph
[11:10] * Miouge (~Miouge@94.136.92.20) Quit ()
[11:10] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit ()
[11:11] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[11:11] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[11:12] * ivotron (~ivotron@c-67-169-145-20.hsd1.ca.comcast.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[11:13] * shyu (~Shanzhi@119.254.120.66) Quit (Remote host closed the connection)
[11:15] * lucas1 (~Thunderbi@218.76.52.64) Quit (Ping timeout: 480 seconds)
[11:19] * notarima (~oracular@7R2AADO17.tor-irc.dnsbl.oftc.net) Quit ()
[11:19] * cmrn (~dug@192.42.115.102) has joined #ceph
[11:27] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[11:29] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) Quit (Quit: It's just that easy)
[11:32] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[11:40] * yanzheng (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[11:41] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[11:44] * arcimboldo (~antonio@84-75-174-248.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[11:49] * bara (~bara@213.175.37.10) has joined #ceph
[11:49] * cmrn (~dug@7R2AADO20.tor-irc.dnsbl.oftc.net) Quit ()
[11:49] * FierceForm (~clusterfu@195.169.125.226) has joined #ceph
[11:54] * yanzheng (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[12:01] * jks (~jks@178.155.151.121) Quit (Ping timeout: 480 seconds)
[12:01] * karnan (~karnan@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:04] * bara (~bara@213.175.37.10) Quit (Ping timeout: 480 seconds)
[12:06] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[12:09] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[12:09] * karnan (~karnan@121.244.87.117) has joined #ceph
[12:33] * CephLogBot (~PircBot@92.63.168.213) has joined #ceph
[12:33] * Topic is 'CDS Schedule Posted: http://goo.gl/i72wN8 || http://ceph.com/get || dev channel #ceph-devel || test lab channel #sepia'
[12:33] * Set by scuttlemonkey!~scuttle@nat-pool-rdu-t.redhat.com on Mon Mar 02 21:13:33 CET 2015
[12:36] * dgurtner (~dgurtner@178.197.231.115) Quit (Ping timeout: 480 seconds)
[12:37] * arcimboldo (~antonio@dhcp-y11-zi-s3it-130-60-34-042.uzh.ch) has joined #ceph
[12:43] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[12:43] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[12:43] * georgem (~Adium@65-110-211-254.cpe.pppoe.ca) has joined #ceph
[12:44] * georgem (~Adium@65-110-211-254.cpe.pppoe.ca) Quit ()
[12:44] * georgem (~Adium@206.108.127.16) has joined #ceph
[12:49] * mrapple (~dicko@7R2AADO4W.tor-irc.dnsbl.oftc.net) Quit ()
[12:50] * flisky (~Thunderbi@106.39.60.34) Quit (Quit: flisky)
[12:51] * nisha (~nisha@2406:5600:26:e274:d121:b821:fd9e:2252) has joined #ceph
[12:52] * bara (~bara@nat-pool-brq-u.redhat.com) Quit (Ping timeout: 480 seconds)
[12:52] * arbrandes (~arbrandes@152.249.49.73) has joined #ceph
[12:55] * shyu (~Shanzhi@119.254.120.66) Quit (Remote host closed the connection)
[12:56] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[12:57] * rotbeard (~redbeard@cm-171-100-223-199.revip10.asianet.co.th) has joined #ceph
[13:01] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[13:02] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[13:02] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[13:05] * fdmanana (~fdmanana@bl13-153-166.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[13:05] * linjan (~linjan@176.195.196.88) Quit (Ping timeout: 480 seconds)
[13:07] * dgurtner (~dgurtner@178.197.231.115) has joined #ceph
[13:19] * Kyso (~Diablothe@62SAAACXD.tor-irc.dnsbl.oftc.net) has joined #ceph
[13:23] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[13:27] * shylesh (~shylesh@121.244.87.118) Quit (Ping timeout: 480 seconds)
[13:33] * yanzheng (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[13:34] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[13:34] * yanzheng (~zhyan@125.71.106.169) Quit (Remote host closed the connection)
[13:35] * fdmanana (~fdmanana@bl13-153-166.dsl.telepac.pt) has joined #ceph
[13:39] * shylesh__ (~shylesh@121.244.87.124) has joined #ceph
[13:47] * shylesh__ (~shylesh@121.244.87.124) Quit (Ping timeout: 480 seconds)
[13:49] * Kyso (~Diablothe@62SAAACXD.tor-irc.dnsbl.oftc.net) Quit ()
[13:49] * Altitudes (~Thayli@anon-52-10.vpn.ipredator.se) has joined #ceph
[13:50] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[13:53] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[13:57] * serg (~serg@ip100-115.245.80.crimea.com) has joined #ceph
[13:58] * fdmanana (~fdmanana@bl13-153-166.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[13:59] <serg> hello)
[14:02] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[14:03] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) has joined #ceph
[14:04] * dopesong_ (~dopesong@lb1.mailer.data.lt) has joined #ceph
[14:04] * dynamicudpate (~overonthe@199.68.193.54) has joined #ceph
[14:08] * georgem (~Adium@184.151.178.15) has joined #ceph
[14:09] * nardial (~ls@dslb-178-006-188-098.178.006.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[14:10] * neurodrone (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) has joined #ceph
[14:10] <serg> need some advice: i've installed and deployed a ceph cluster with ceph-deploy. which default options should i change for better performance? i've already changed pg_num and journal size. are there other important options?
[14:11] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[14:11] * rburkholder (~overonthe@199.68.193.62) Quit (Ping timeout: 480 seconds)
[14:12] <Kingrat> serg, it depends on your cluster size, configuration, and work load
[14:14] <serg> well, it has 3 mons with 3 mds and 12 osds... it should become a file storage with many TB of pictures)
[14:17] <doppelgrau> serg: small SSD-only pool for the cephfs-metadata?!
[14:18] <serg> there are only ssd
[14:19] * Altitudes (~Thayli@7R2AADO7E.tor-irc.dnsbl.oftc.net) Quit ()
[14:19] * ylmson (~Arfed@89.105.194.81) has joined #ceph
[14:24] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[14:24] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[14:24] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[14:26] * fdmanana (~fdmanana@bl13-153-166.dsl.telepac.pt) has joined #ceph
[14:35] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) has joined #ceph
[14:36] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[14:37] * georgem (~Adium@184.151.178.15) Quit (Quit: Leaving.)
[14:37] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[14:38] <serg> people... can you check my work? i want to know that i've done everything right... i've created new data and metadata pools with pg_num 800 each, deleted all the old pools, and created mds newfs with the new pools. everything looks fine...
[14:38] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) has joined #ceph
[14:39] <Kingrat> pg_num should be powers of two, so 512 or 1024
[14:40] <Kingrat> you will probably want to increase it
[14:40] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[14:40] * ira (~ira@121.244.87.124) Quit (Ping timeout: 480 seconds)
[14:41] <serg> why not 800? its works fine...
[14:42] * kefu (~kefu@183.193.158.17) has joined #ceph
[14:43] <Kingrat> the placement algorithm is more consistent with powers of two, other values can give you unbalanced placement, i.e. some osds will have more load and/or the data wont be spread as evenly
[14:43] <serg> if the developers said that... i will change it to 1024... but last time i tried to grow pg_num there was an error saying i cannot increase pg_num because it already has the maximum...
[14:44] <Kingrat> you cant grow it by more than a certain percentage at a time
[14:44] <Kingrat> you have to increase it by a certain amount per osd in steps
[14:45] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[14:45] <Kingrat> and yes they do suggest it, if you look at the documentation it is in there
[14:45] <serg> so i have 800 now, what should i enter now? 850/900...../1024 ?
[14:45] <Kingrat> 1024 is the nearest power of 2, and should work ok for 12 osds, probably higher than you need but you cant go back to 512
[14:46] <serg> i can remove all of it and create new pools )
[14:46] <Kingrat> i would probably use 512
[14:46] <serg> i will increase osds a year later up to 24
[14:46] <Kingrat> by then i would go 1024
[14:47] <serg> thx a lot
[14:47] <Kingrat> iirc the general recommendation used to be (num osd * 100)/replica size
[14:47] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[14:48] <Kingrat> so 2400/3 would be 800, nearest would be 1024
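
(The rule of thumb Kingrat quotes, worked through for serg's planned 24 OSDs; just arithmetic:)

    # (num_osd * 100) / replica_size, then round up to a power of two
    echo $(( 24 * 100 / 3 ))   # 800 -> next power of two is 1024
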
[14:48] <serg> yeah :). i have already changed pg_num to 1024, changed without problems :)
[14:48] <serg> scrubbing now... )
[14:49] * ylmson (~Arfed@1ADAAAAKJ.tor-irc.dnsbl.oftc.net) Quit ()
[14:49] * Solvius (~CydeWeys@tor-exit4-readme.dfri.se) has joined #ceph
[14:53] <serg> health HEALTH_WARN pool data pg_num 1024 > pgp_num 800; pool metadata pg_num 1024 > pgp_num 800 -- is it ok?)) scrubbing at that time....
[14:53] <doppelgrau> change pgp_num too
[14:54] <serg> =))) thx
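
(A sketch of the fix doppelgrau suggests; "data" and "metadata" are serg's pool names from above:)

    # pgp_num must follow pg_num, or the cluster stays in HEALTH_WARN;
    # raising pgp_num is what actually rebalances data onto the new PGs
    ceph osd pool set data pgp_num 1024
    ceph osd pool set metadata pgp_num 1024
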
[14:54] * bvivek (~bvivek@idp01webcache2-z.apj.hpecore.net) has joined #ceph
[14:55] * mhack (~mhack@68-184-37-225.dhcp.oxfr.ma.charter.com) has joined #ceph
[14:56] * georgem (~Adium@206.108.127.16) has joined #ceph
[14:56] <serg> what about the weight of the OSDs? are the default parameters enough?
[14:57] <Kingrat> weight should be equal to the size of the osd, more or less that is the typical arrangement
[14:58] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[14:58] <Kingrat> some people tweak it to rebalance a little if they have one getting full but that generally isnt a problem
[15:00] <serg> if my osds have the same size, is it possible that one of them becomes full while the others don't? and i can't understand what the weight value means... 1.0 = 100% of the osd?
[15:00] * smerz (~ircircirc@37.74.194.90) has joined #ceph
[15:03] <doppelgrau> weight is relative
[15:03] <doppelgrau> and the distribution is not always as perfect as everybody hopes
[15:06] <Kingrat> 1.0 normally means 1tb in most deployments
[15:06] <Kingrat> but it is relative like doppelgrau said, it could be different
[15:06] <monsted> with more PGs the distribution might be better?
[15:06] <Kingrat> that is the idea, but more pgs also use more ram and cpu
[15:07] <Kingrat> so there is a sweet spot
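
(How the relative weights discussed above are inspected and adjusted in practice; osd.7 and the value are example numbers only:)

    ceph osd tree                       # shows each OSD's CRUSH weight
    ceph osd crush reweight osd.7 1.0   # convention: weight ~ capacity in TB
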
[15:07] <monsted> swift seems to be smarter on that point :)
[15:08] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[15:09] * neurodrone (~neurodron@pool-100-1-89-227.nwrknj.fios.verizon.net) Quit (Quit: neurodrone)
[15:12] * trochej (~trochej@217.8.185.189) has joined #ceph
[15:13] <serg> thx
[15:19] * Solvius (~CydeWeys@1ADAAAAME.tor-irc.dnsbl.oftc.net) Quit ()
[15:19] * uhtr5r (~dontron@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[15:22] * cpaquin (~cpaquin@c-24-99-55-9.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[15:24] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) has joined #ceph
[15:25] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[15:25] * bvivek_ (~bvivek@idp01webcache4-z.apj.hpecore.net) has joined #ceph
[15:27] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[15:28] * bvivek (~bvivek@idp01webcache2-z.apj.hpecore.net) Quit (Read error: Connection reset by peer)
[15:32] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:34] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[15:35] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) has joined #ceph
[15:37] * bkopilov (~bkopilov@bzq-109-66-56-13.red.bezeqint.net) has joined #ceph
[15:47] * tkheg (~tkheg@80.237.142.138) has joined #ceph
[15:47] * tkheg (~tkheg@80.237.142.138) Quit ()
[15:49] * uhtr5r (~dontron@1ADAAAAN5.tor-irc.dnsbl.oftc.net) Quit ()
[15:49] * Kaervan (~Spessu@atlantic480.us.unmetered.com) has joined #ceph
[15:54] * tkheg (~tkheg@80.237.142.138) has joined #ceph
[15:55] * tkheg (~tkheg@80.237.142.138) Quit ()
[15:55] <zenpac> Will calamari show an event if an (Mon, OSD, MDS, RGW) service is added?
[15:56] * madkiss (~madkiss@2001:6f8:12c3:f00f:bcb9:662d:bf1e:7c82) has joined #ceph
[15:57] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[15:57] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[15:57] <serg> question about monitoring ceph: which monitoring service does everyone use? which is better?
[16:03] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) has joined #ceph
[16:08] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[16:08] <doppelgrau> nagios + cephdash
[16:13] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[16:15] * ifur (~osm@0001f63e.user.oftc.net) Quit (Quit: Lost terminal)
[16:17] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[16:17] * rotbeard (~redbeard@cm-171-100-223-199.revip10.asianet.co.th) Quit (Ping timeout: 480 seconds)
[16:19] * Kaervan (~Spessu@7R2AADPCI.tor-irc.dnsbl.oftc.net) Quit ()
[16:19] * Throlkim (~MKoR@heaven.tor.ninja) has joined #ceph
[16:20] * davidz (~davidz@2605:e000:1313:8003:2cde:e8d:29fd:1f51) Quit (Quit: Leaving.)
[16:21] * davidz (~davidz@2605:e000:1313:8003:2cde:e8d:29fd:1f51) has joined #ceph
[16:21] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) Quit (Remote host closed the connection)
[16:23] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[16:23] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[16:25] <loicd> is there a way to rados ls so that it shows even internal objects (like hit set objects in a cache pool)
[16:25] <loicd> ?
[16:26] * kefu (~kefu@183.193.158.17) Quit (Remote host closed the connection)
[16:27] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) has joined #ceph
[16:27] * kefu (~kefu@183.193.158.17) has joined #ceph
[16:28] * GnikLlort (~osama@vc-gp-n-41-13-208-188.umts.vodacom.co.za) has joined #ceph
[16:28] * yanzheng (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[16:29] <TMM> If I'm deploying ceph osd nodes with only SSDs should I just place the journal on the same physical drive as the data?
[16:29] <serg> TMM: as far as i know, the journal should be as close as possible to the osd data....
[16:30] <TMM> ok so if my OSDs have 8 SSDs I should just put each osd's journal on the same drive then
[16:31] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:32] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) Quit ()
[16:32] * wushudoin (~wushudoin@2601:646:8201:7769:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:32] <smerz> TMM: yes
[16:33] <TMM> thanks smerz, serg
[16:34] * dopesong_ (~dopesong@lb1.mailer.data.lt) Quit (Ping timeout: 480 seconds)
[16:35] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) has joined #ceph
[16:35] <serg> TMM: if you use ceph-deploy, the journal is created on the same drive as the osd
[16:35] * dopesong (~dopesong@88-119-94-55.static.zebra.lt) Quit (Ping timeout: 480 seconds)
[16:36] <TMM> no, we're using puppet to deploy the cluster
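
(For the colocated-journal setup discussed above, a sketch of what ceph-disk, which ceph-deploy wraps, does by default; /dev/sdb is a placeholder:)

    # with no separate journal device given, ceph-disk carves a journal
    # partition out of the same SSD as the data partition
    ceph-disk prepare /dev/sdb
    # the journal partition size comes from "osd journal size" in ceph.conf
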
[16:38] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[16:40] * jklare (~jklare@185.27.181.36) Quit (Ping timeout: 480 seconds)
[16:40] * kefu_ (~kefu@183.193.158.17) has joined #ceph
[16:41] <loicd> kefu: do you know the answer to this by any chance ? After running rados -p fast cache-flush-evict-all I see 6 objects left (as reported via ceph df) and I assume these are internal hitset objects. This is in hammer.
[16:42] * erice (~erice@c-76-120-53-165.hsd1.co.comcast.net) has joined #ceph
[16:42] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[16:43] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[16:43] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[16:43] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:46] <serg> i have seen that trouble too
[16:46] <serg> at my last test cluster at virtualbox)
[16:47] * kefu (~kefu@183.193.158.17) Quit (Ping timeout: 480 seconds)
[16:47] <serg> created many files and after deleting them i got less free space than before)
[16:47] * kmARC (~kmARC@2001:620:20:16:a086:13f2:9e:1dfa) Quit (Ping timeout: 480 seconds)
[16:48] * squizzi (~squizzi@nat-pool-rdu-t.redhat.com) has joined #ceph
[16:48] <serg> does anyone know solution?
[16:49] * Throlkim (~MKoR@1ADAAAAQW.tor-irc.dnsbl.oftc.net) Quit ()
[16:49] * pepzi (~Architect@176.10.99.205) has joined #ceph
[16:51] * fitzdsl (~Romain@dedibox.fitzdsl.net) has joined #ceph
[16:51] * fitzdsl (~Romain@dedibox.fitzdsl.net) Quit ()
[16:51] * kefu_ is now known as kefu|afk
[16:51] * kefu|afk is now known as kefu_
[16:54] * kefu_ is now known as kefu
[16:54] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) has joined #ceph
[16:55] * serg (~serg@ip100-115.245.80.crimea.com) Quit (Quit: Leaving)
[16:56] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:56] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) has joined #ceph
[16:56] <loicd> looks right kefu ( rados --namespace .ceph-internal -p fast ls )
[16:56] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[16:58] <kefu> loicd, do you have the name of these objects?
[16:58] <loicd> kefu: yes:
[16:58] <loicd> hit_set_4.3_archive_2015-08-10 14:21:26.133292_2015-08-11 10:42:57.463074
[16:58] <loicd> etc.
[16:59] <loicd> self explanatory :-)
[16:59] <kefu> right =)
[16:59] <kefu> and localtime, i am sure =D
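
(Putting the exchange above together: internal objects such as cache-tier hit sets live in the .ceph-internal namespace, which a plain "rados ls" skips. "fast" is loicd's cache pool:)

    rados --namespace .ceph-internal -p fast ls   # shows hit_set_* objects
    rados -p fast ls --all                        # assumption: --all walks every namespace
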
[17:01] * shylesh__ (~shylesh@59.95.69.48) has joined #ceph
[17:02] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:04] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:05] * yguang11 (~yguang11@66.228.162.44) has joined #ceph
[17:08] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) has joined #ceph
[17:09] * ircolle (~Adium@2601:285:201:2bf9:b562:f1b0:889:2f91) has joined #ceph
[17:09] * ircolle (~Adium@2601:285:201:2bf9:b562:f1b0:889:2f91) Quit (Remote host closed the connection)
[17:11] * kefu_ (~kefu@183.193.158.17) has joined #ceph
[17:14] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[17:14] * GnikLlort (~osama@vc-gp-n-41-13-208-188.umts.vodacom.co.za) Quit (Quit: Leaving)
[17:15] * arbrandes (~arbrandes@152.249.49.73) Quit (Ping timeout: 480 seconds)
[17:16] * kefu (~kefu@183.193.158.17) Quit (Ping timeout: 480 seconds)
[17:18] * TheSov (~TheSov@cip-248.trustwave.com) has joined #ceph
[17:19] * pepzi (~Architect@1ADAAAATC.tor-irc.dnsbl.oftc.net) Quit ()
[17:19] * JamesHarrison (~Misacorp@195.169.125.226) has joined #ceph
[17:21] <TheSov> anyone have a good howto on active/active nfs?
[17:21] * zaitcev (~zaitcev@ip-64-134-128-17.public.wayport.net) has joined #ceph
[17:22] * kefu (~kefu@183.193.158.17) has joined #ceph
[17:23] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:25] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) has joined #ceph
[17:27] * kefu_ (~kefu@183.193.158.17) Quit (Ping timeout: 480 seconds)
[17:29] * zaitcev (~zaitcev@ip-64-134-128-17.public.wayport.net) Quit (Quit: Bye)
[17:29] * yanzheng (~zhyan@125.71.106.169) has joined #ceph
[17:30] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) Quit (Remote host closed the connection)
[17:32] * ircolle (~Adium@2601:285:201:2bf9:9512:5f93:4ed4:6cc2) has joined #ceph
[17:33] * danieagle (~Daniel@187.35.202.112) has joined #ceph
[17:34] * kmARC (~kmARC@80-219-254-3.dclient.hispeed.ch) has joined #ceph
[17:36] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) has joined #ceph
[17:37] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[17:39] * nhm (~nhm@ip-64-134-128-17.public.wayport.net) Quit (Ping timeout: 480 seconds)
[17:40] * kevinc (~kevinc__@client65-131.sdsc.edu) has joined #ceph
[17:42] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:43] * linjan (~linjan@176.195.196.88) has joined #ceph
[17:45] * bvivek_ (~bvivek@idp01webcache4-z.apj.hpecore.net) Quit (Read error: Connection reset by peer)
[17:47] * moore (~moore@64.202.160.88) has joined #ceph
[17:47] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) has joined #ceph
[17:47] * Nacer (~Nacer@LCaen-656-1-72-185.w80-13.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[17:49] * JamesHarrison (~Misacorp@1ADAAAAU3.tor-irc.dnsbl.oftc.net) Quit ()
[17:52] * kmARC (~kmARC@80-219-254-3.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[17:53] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[17:53] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[17:53] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[17:53] * nisha (~nisha@2406:5600:26:e274:d121:b821:fd9e:2252) Quit (Ping timeout: 480 seconds)
[17:55] * branto (~branto@ip-213-220-232-132.net.upcbroadband.cz) Quit (Quit: Leaving.)
[17:55] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) has joined #ceph
[17:56] <TheSov> or is it possible to run nfs over cephfs?
[17:57] * arcimboldo (~antonio@dhcp-y11-zi-s3it-130-60-34-042.uzh.ch) Quit (Ping timeout: 480 seconds)
[17:59] * kefu (~kefu@183.193.158.17) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[18:02] * kefu (~kefu@114.92.110.67) has joined #ceph
[18:05] * nisha (~nisha@2406:5600:25:2bf1:7112:e5cc:293c:d1af) has joined #ceph
[18:05] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[18:07] * segutier (~segutier@sfo-vpn1.shawnlower.net) has joined #ceph
[18:07] * kefu (~kefu@114.92.110.67) has joined #ceph
[18:08] * Hemanth (~Hemanth@117.221.99.71) has joined #ceph
[18:08] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:10] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:15] <m0zes> ganesha
[18:16] * reed (~reed@2607:f298:a:607:29b1:9870:a5dd:125d) has joined #ceph
[18:18] * kefu (~kefu@114.92.110.67) Quit (Max SendQ exceeded)
[18:19] * kefu (~kefu@114.92.110.67) has joined #ceph
[18:19] * ifur (~osm@0001f63e.user.oftc.net) Quit (Quit: Lost terminal)
[18:19] * SaneSmith (~csharp@62SAAADEM.tor-irc.dnsbl.oftc.net) has joined #ceph
[18:19] <Be-El> or standard linux nfs
[18:20] <Be-El> the advantage of ganesha is the fact that ganesha supports pnfs and can operate on libcephfs directly
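
(For the simpler "standard linux nfs" route Be-El mentions, a minimal sketch: export a kernel cephfs mount over nfsd. The monitor address and secret file are placeholders:)

    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # nfs needs an explicit fsid for exports that aren't block devices
    echo '/mnt/cephfs *(rw,no_root_squash,fsid=101)' >> /etc/exports
    exportfs -ra
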
[18:20] * ifur (~osm@0001f63e.user.oftc.net) has joined #ceph
[18:20] * ira (~ira@1.186.32.22) has joined #ceph
[18:21] * xarses_ (~xarses@12.164.168.117) has joined #ceph
[18:29] * scuttle|afk is now known as scuttlemonkey
[18:30] * lcurtis (~lcurtis@47.19.105.250) has joined #ceph
[18:33] * arbrandes (~arbrandes@152.249.49.73) has joined #ceph
[18:39] * ivotron (~ivotron@c-67-169-145-20.hsd1.ca.comcast.net) has joined #ceph
[18:40] * rotbeard (~redbeard@cm-171-100-223-199.revip10.asianet.co.th) has joined #ceph
[18:49] * SaneSmith (~csharp@62SAAADEM.tor-irc.dnsbl.oftc.net) Quit ()
[18:49] * Dinnerbone (~Sirrush@195.169.125.226) has joined #ceph
[18:53] <TheSov> hmm what is ganesha?
[18:53] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[18:56] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[19:00] * rotbeard (~redbeard@cm-171-100-223-199.revip10.asianet.co.th) Quit (Quit: Leaving)
[19:01] * yanzheng1 (~zhyan@107.170.210.7) has joined #ceph
[19:08] * yanzheng (~zhyan@125.71.106.169) Quit (Ping timeout: 480 seconds)
[19:16] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[19:17] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[19:18] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) has joined #ceph
[19:19] * Dinnerbone (~Sirrush@62SAAADGA.tor-irc.dnsbl.oftc.net) Quit ()
[19:19] * Pirate (~Jebula@aurora.enn.lu) has joined #ceph
[19:19] * zacbri (~zacbri@2a01:e35:2e1e:a70:a148:491b:ca5b:c2b) Quit (Remote host closed the connection)
[19:21] * yanzheng1 (~zhyan@107.170.210.7) Quit (Ping timeout: 480 seconds)
[19:22] * yanzheng1 (~zhyan@125.71.106.169) has joined #ceph
[19:30] * vata (~vata@ARennes-652-1-147-207.w92-139.abo.wanadoo.fr) has joined #ceph
[19:30] * Pirate (~Jebula@7R2AADPL0.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[19:30] * Bonzaii (~Dinnerbon@li747-151.members.linode.com) has joined #ceph
[19:32] * brutuscat (~brutuscat@234.Red-79-151-98.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:34] * snakamoto (~Adium@192.16.26.2) has joined #ceph
[19:34] * snakamoto1 (~Adium@192.16.26.2) has joined #ceph
[19:40] * yanzheng1 (~zhyan@125.71.106.169) Quit (Quit: This computer has gone to sleep)
[19:42] * snakamoto (~Adium@192.16.26.2) Quit (Ping timeout: 480 seconds)
[19:42] * Hemanth (~Hemanth@117.221.99.71) Quit (Ping timeout: 480 seconds)
[19:47] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[19:47] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[19:48] <TheSov> very nice, looks like ganesha is good
[19:48] * linjan (~linjan@176.195.196.88) Quit (Ping timeout: 480 seconds)
[19:51] * kefu_ (~kefu@114.92.110.67) has joined #ceph
[19:55] * elder_ (~elder@h69-130-42-166.pqlkmn.broadband.dynamic.tds.net) has joined #ceph
[19:55] * kefu (~kefu@114.92.110.67) Quit (Ping timeout: 480 seconds)
[19:56] * kmARC (~kmARC@80-219-254-3.dclient.hispeed.ch) has joined #ceph
[20:00] * linjan (~linjan@176.195.196.88) has joined #ceph
[20:00] * bababurko (~bababurko@c-73-223-191-162.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[20:00] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[20:00] * Bonzaii (~Dinnerbon@62SAAADHV.tor-irc.dnsbl.oftc.net) Quit ()
[20:00] * matx (~Plesioth@spftor1e1.privacyfoundation.ch) has joined #ceph
[20:02] * kefu_ (~kefu@114.92.110.67) Quit (Quit: Textual IRC Client: www.textualapp.com)
[20:05] * Hemanth (~Hemanth@117.221.97.86) has joined #ceph
[20:10] * shohn (~shohn@dslb-094-223-165-069.094.223.pools.vodafone-ip.de) has joined #ceph
[20:10] * haomaiwa_ (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) has joined #ceph
[20:12] * kefu (~kefu@114.92.110.67) has joined #ceph
[20:15] * ircolle1 (~Adium@2601:285:201:2bf9:9512:5f93:4ed4:6cc2) has joined #ceph
[20:15] * ircolle (~Adium@2601:285:201:2bf9:9512:5f93:4ed4:6cc2) Quit (Read error: Connection reset by peer)
[20:16] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[20:16] * shohn1 (~shohn@dslb-094-223-165-069.094.223.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[20:17] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[20:17] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[20:18] * shohn (~shohn@dslb-094-223-165-069.094.223.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[20:19] * kefu_ (~kefu@183.193.158.17) has joined #ceph
[20:20] * jklare (~jklare@185.27.181.36) has joined #ceph
[20:21] * derjohn_mobi (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[20:21] * kefu (~kefu@114.92.110.67) Quit (Read error: Connection reset by peer)
[20:29] * kefu_ (~kefu@183.193.158.17) Quit (Ping timeout: 480 seconds)
[20:30] * matx (~Plesioth@7R2AADPNH.tor-irc.dnsbl.oftc.net) Quit ()
[20:30] * JamesHarrison (~DougalJac@r3.geoca.st) has joined #ceph
[20:32] * linjan (~linjan@176.195.196.88) Quit (Ping timeout: 480 seconds)
[20:33] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) has joined #ceph
[20:34] * Vacuum__ (~Vacuum@i59F79974.versanet.de) Quit (Quit: leaving)
[20:34] * ivotron (~ivotron@c-67-169-145-20.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[20:34] * ivotron (~ivotron@c-67-169-145-20.hsd1.ca.comcast.net) has joined #ceph
[20:39] * haomaiwa_ (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[20:42] * Vacuum_ (~Vacuum@i59F79974.versanet.de) has joined #ceph
[20:49] * b0e (~aledermue@p5083D6AA.dip0.t-ipconnect.de) has joined #ceph
[20:49] * ira (~ira@1.186.32.22) Quit (Quit: Leaving)
[20:53] * bsanders (~billysand@russell.dreamhost.com) Quit (Quit: leaving)
[20:59] * shylesh__ (~shylesh@59.95.69.48) Quit (Remote host closed the connection)
[21:00] * ircolle (~Adium@2601:285:201:2bf9:9512:5f93:4ed4:6cc2) has joined #ceph
[21:00] * ircolle1 (~Adium@2601:285:201:2bf9:9512:5f93:4ed4:6cc2) Quit (Read error: Connection reset by peer)
[21:00] * ivotron (~ivotron@c-67-169-145-20.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[21:00] * JamesHarrison (~DougalJac@62SAAADLQ.tor-irc.dnsbl.oftc.net) Quit ()
[21:02] * campee (~campee@c-50-148-149-27.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[21:02] * kmARC (~kmARC@80-219-254-3.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:02] * markl (~mark@knm.org) Quit (Remote host closed the connection)
[21:03] * campee (~campee@c-50-148-149-27.hsd1.ca.comcast.net) has joined #ceph
[21:03] * npcomp (~npcomp@c-24-126-240-124.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[21:03] * markl (~mark@knm.org) has joined #ceph
[21:03] * npcomp (~npcomp@c-24-126-240-124.hsd1.ga.comcast.net) has joined #ceph
[21:03] * Hemanth (~Hemanth@117.221.97.86) Quit (Quit: Leaving)
[21:11] * dolgner (~textual@50-192-42-249-static.hfc.comcastbusiness.net) has joined #ceph
[21:15] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:17] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[21:21] * elder_ (~elder@h69-130-42-166.pqlkmn.broadband.dynamic.tds.net) Quit (Quit: Leaving)
[21:26] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[21:27] * TheSov (~TheSov@cip-248.trustwave.com) Quit (Quit: Leaving)
[21:29] * b0e (~aledermue@p5083D6AA.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[21:30] * bene2 (~bene@nat-pool-bos-t.redhat.com) Quit (Quit: Konversation terminated!)
[21:30] * Ralth (~w0lfeh@tor-exit.squirrel.theremailer.net) has joined #ceph
[21:31] * nisha (~nisha@2406:5600:25:2bf1:7112:e5cc:293c:d1af) Quit (Quit: Leaving)
[21:31] * bababurko (~bababurko@70-90-168-211-SFBACalifornia.hfc.comcastbusiness.net) has joined #ceph
[21:38] * rendar (~I@95.235.176.14) Quit (Ping timeout: 480 seconds)
[21:40] * rendar (~I@95.235.176.14) has joined #ceph
[21:40] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[21:40] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[21:43] * julen (~julen@2001:638:70e:11:1cc9:5acd:fea4:a66c) Quit (Ping timeout: 480 seconds)
[21:43] * segutier (~segutier@sfo-vpn1.shawnlower.net) Quit (Ping timeout: 480 seconds)
[21:46] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[21:46] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[21:51] * julen (~julen@2001:638:70e:11:296c:604b:e44d:c98a) has joined #ceph
[21:54] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[21:55] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) has joined #ceph
[22:00] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[22:00] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[22:00] * emre (~edemirors@97.65.225.1) Quit (Remote host closed the connection)
[22:00] * Ralth (~w0lfeh@1ADAAAA8U.tor-irc.dnsbl.oftc.net) Quit ()
[22:00] * puvo (~Kristophe@7R2AADPTL.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:05] * kmARC (~kmARC@80-219-254-3.dclient.hispeed.ch) has joined #ceph
[22:05] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[22:05] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[22:07] * sasha (~achuzhoy@BURLON0309W-LP130-01-1279476678.dsl.bell.ca) has joined #ceph
[22:07] <sasha> ircolle: ++
[22:07] <sasha> Hi all
[22:08] <sasha> Trying to install ceph for the first time and following the guide. Completed the preflight check here: http://ceph.com/docs/master/start/quick-start-preflight/
[22:08] <sasha> failing to run ceph-deploy (Failing to install: ceph-osd ceph-mds ceph-mon ceph-radosgw rpms)
[22:09] <off_rhoden> sasha: if you are seeing it reference those RPMs, are you on RHEL?
[22:09] <sasha> off_rhoden: guilty as charged :) rhel7.1
[22:09] <off_rhoden> sasha: then you need to provide a release with ceph-deploy. So instead of "ceph-deploy install host host host..." do "ceph-deploy install --release hammer host host host..."
[22:10] <off_rhoden> otherwise it's trying to install the RHCS packages from Red Hat.
[22:10] <sasha> off_rhoden: umm, is it documented somewhere? i.e. it's not in the guide
[22:10] <off_rhoden> it's been in the ceph-deploy release notes at some point. :)
[22:11] <off_rhoden> but you are right - it needs to be in the ceph docs for sure
[22:11] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[22:12] <sasha> off_rhoden: Failed to execute command: yum -y install epel-release :)
[22:12] <sasha> off_rhoden: will try to install it manually on the nodes
[22:13] <off_rhoden> sasha: you are the second person to tell me about that in the last 24 hours. :) I'll have to check into it. I don't think we've seen anyone install upstream on RHEL in a while.
[22:14] <off_rhoden> sasha: alternatively, you can enable rhel-7-server-optional-rpms and rhel-7-server-extras-rpms
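For reference, enabling those two channels would look roughly like this, assuming the host is already registered with subscription-manager:

    subscription-manager repos --enable=rhel-7-server-optional-rpms
    subscription-manager repos --enable=rhel-7-server-extras-rpms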
[22:19] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) Quit (Remote host closed the connection)
[22:23] <zenpac> Can I have multiple RGW servers for the same set of objects/pools?
[22:24] * nardial (~ls@dslb-178-006-188-098.178.006.pools.vodafone-ip.de) has joined #ceph
[22:25] <cholcombe> does ceph handle disk failures in a cache pool the same way as normal?
[22:27] * dyasny (~dyasny@104.158.25.230) Quit (Remote host closed the connection)
[22:28] <sasha> off_rhoden: ok, so after adding the epel repo and running the "ceph-deploy install" with " --release hammer" argument - was able to install ceph. thanks a lot
[22:28] * nardial (~ls@dslb-178-006-188-098.178.006.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[22:28] <off_rhoden> sasha: oh good. glad that worked.
[22:29] <ircolle> hrm - had to add the epel repo?
[22:29] <off_rhoden> ircolle: upstream
[22:29] <sasha> ircolle: on RHEL
[22:29] <sasha> ircolle: or register with additional channels
[22:29] <ircolle> that's an interesting use case
[22:30] <off_rhoden> ceph-deploy actually unconditionally tries to install an "epel-release" rpm, so now that I think about it, adding the additional CDN repos would still fail. :/ They don't contain an epel-release package
[22:30] <cholcombe> anyone know where i can find info on the LSI ceph appliance?
[22:30] * puvo (~Kristophe@7R2AADPTL.tor-irc.dnsbl.oftc.net) Quit ()
[22:30] * mason (~spate@62SAAADRM.tor-irc.dnsbl.oftc.net) has joined #ceph
[22:32] <sasha> off_rhoden: right, so you might want to install the epel directly from URL, like: rpm -ivh https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm (or equivalent yum/dnf commands)
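The yum equivalent of that rpm invocation, which also resolves any dependencies, would be something along these lines (same URL as above):

    yum install -y https://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm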
[22:33] * xarses_ (~xarses@12.164.168.117) Quit (Remote host closed the connection)
[22:33] * xarses_ (~xarses@12.164.168.117) has joined #ceph
[22:34] <off_rhoden> sasha: yeah, it did that long ago... Might have to go back. If installing upstream packages, RHEL gets treated as CentOS. It is much nicer to just reference "epel-release" than have hardcoded URLs with a version number in it... but that breaks RHEL
[22:34] <off_rhoden> wish there was an epel-release-latest.noarch.rpm. :)
[22:34] * segutier (~segutier@sfo-vpn1.shawnlower.net) has joined #ceph
[22:35] <scuttlemonkey> cholcombe: the what now?
[22:35] <cholcombe> scuttlemonkey, haha
[22:35] <scuttlemonkey> I know of a couple appliances...but not LSI
[22:35] <cholcombe> scuttlemonkey, i heard through a friend that LSI had a ceph appliance. I don't remember what it's called though
[22:35] <cholcombe> scuttlemonkey, which appliances do you know of?
[22:36] <scuttlemonkey> the most notable one was the fujitsu one
[22:36] <scuttlemonkey> the CD10k
[22:36] <cholcombe> yeah i remember that one
[22:37] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[22:37] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[22:37] <scuttlemonkey> which LSI?
[22:37] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[22:38] <cholcombe> oh no i meant the fujitsu one
[22:38] <portante> hi folks, I am interested in finding out how to write a protocol sniffer for Ceph OSD traffic and API traffic, is there a document that describes the protocols somewhere?
[22:38] <scuttlemonkey> cholcombe: ahh, ok
[22:39] <scuttlemonkey> cholcombe: was gonna say, we had a ref arch w/ the old LSI too, but their core group went to Avago
[22:39] <portante> I'd like to write a protocol parser for packetbeat, see https://github.com/elastic/packetbeat
[22:39] <cholcombe> scuttlemonkey, i see
[22:39] * jwilkins (~jowilkin@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[22:40] <scuttlemonkey> in that case
[22:40] <scuttlemonkey> cholcombe: http://sp.ts.fujitsu.com/dmsp/Publications/public/ds-eternus-cd10000-ww-en.pdf
[22:40] * dyasny (~dyasny@104.158.25.230) has joined #ceph
[22:40] <cholcombe> yeah i'm looking at that right now
[22:40] * sleinen (~Adium@2001:620:0:68::100) has joined #ceph
[22:40] <scuttlemonkey> cool
[22:40] <cholcombe> sandisk has one also but it's most likely going to be crazy expensive
[22:40] <cholcombe> it's all flash
[22:40] <scuttlemonkey> yeah
[22:41] <scuttlemonkey> we have a number of new reference architectures coming out as well
[22:41] <scuttlemonkey> and I know there are at least 2 more appliance offerings coming in the next 8-12 mos
[22:41] <cholcombe> oh yeah?
[22:41] <cholcombe> got a link i can check out?
[22:41] <scuttlemonkey> not yet unf
[22:41] <scuttlemonkey> data still being assembled
[22:41] <cholcombe> ok
[22:42] <scuttlemonkey> it's a separate group from me, I just hear rumblings
[22:42] <scuttlemonkey> lemme poke them and see what they have
[22:42] <cholcombe> cool thanks
[22:43] <scuttlemonkey> there is a Seagate ref arch with part of that ex-LSI-now-Seagate team
[22:43] <cholcombe> i'd check that out also if you have a link
[22:44] <scuttlemonkey> yeah, he is gonna send it to me in a few here
[22:44] <scuttlemonkey> I'll toss it up on dropbox or something
[22:44] <cholcombe> ok
[22:45] * fdmanana (~fdmanana@bl13-153-166.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[22:46] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[22:48] * kmARC (~kmARC@80-219-254-3.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:49] * sigsegv (~sigsegv@188.25.20.178) has joined #ceph
[22:54] * sleinen (~Adium@2001:620:0:68::100) Quit (Ping timeout: 480 seconds)
[22:55] * danieagle (~Daniel@187.35.202.112) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[22:56] * dopesong (~dopesong@78-60-74-130.static.zebra.lt) has joined #ceph
[23:00] * mason (~spate@62SAAADRM.tor-irc.dnsbl.oftc.net) Quit ()
[23:01] * LorenXo (~Harryhy@thoreau.gtor.org) has joined #ceph
[23:06] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[23:08] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[23:09] * fdmanana (~fdmanana@bl13-153-166.dsl.telepac.pt) has joined #ceph
[23:15] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[23:15] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[23:30] * LorenXo (~Harryhy@1ADAAABCN.tor-irc.dnsbl.oftc.net) Quit ()
[23:30] * CoZmicShReddeR (~Eric@7R2AADPWO.tor-irc.dnsbl.oftc.net) has joined #ceph
[23:32] <sasha> off_rhoden: I wonder, the admin_node in the guide - it can serve as osd/monitor/mds, right?
[23:33] <sasha> off_rhoden: no reason to waste a host just for running the commands
[23:34] <off_rhoden> sasha: yes it can. Nothing prevents it.
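As a sketch of that single-host setup with hammer-era ceph-deploy: the admin node (call it "node1" - a placeholder hostname, as is the OSD directory) can be given all three roles from itself, following the quick-start's directory-backed OSD style:

    ceph-deploy new node1                           # bootstrap with node1 as the initial monitor
    ceph-deploy install --release hammer node1      # upstream packages, per the earlier discussion
    ceph-deploy mon create-initial
    ceph-deploy osd prepare node1:/var/local/osd0   # directory-backed OSD (assumed path)
    ceph-deploy osd activate node1:/var/local/osd0
    ceph-deploy mds create node1                    # only needed if CephFS is wanted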
[23:36] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[23:36] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[23:40] * hemebond (~james@121-98-133-215.bng1.nct.orcon.net.nz) Quit (Remote host closed the connection)
[23:41] * nardial (~ls@dslb-178-006-188-098.178.006.pools.vodafone-ip.de) has joined #ceph
[23:42] * dopesong (~dopesong@78-60-74-130.static.zebra.lt) Quit (Remote host closed the connection)
[23:46] * nardial (~ls@dslb-178-006-188-098.178.006.pools.vodafone-ip.de) Quit ()
[23:49] <sasha> off_rhoden: going through the guide: "A Ceph Storage Cluster requires at least two Ceph OSD Daemons to achieve an active + clean state when the cluster makes two copies of your data (Ceph makes 2 copies by default, but you can adjust it)."
[23:49] <sasha> off_rhoden: but then in the quick install: "Change the default number of replicas in the Ceph configuration file from 3 to 2 so that Ceph can achieve an active + clean state with just two Ceph OSDs."
[23:49] <off_rhoden> awesome. :) the current default is definitely 3
[23:49] <sasha> off_rhoden: thanks :)
[23:50] <off_rhoden> Pull requests welcome. :)
[23:50] <sasha> off_rhoden: you mean for updating the doc?
[23:50] <off_rhoden> that's what I mean, yes.
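For the two-OSD case quoted above, the quick install's suggested tweak is a short ceph.conf stanza; note that these pool defaults only apply to pools created after the change. The min size line is an added assumption, not part of the quoted guide text:

    [global]
    osd pool default size = 2       # two replicas instead of the default three
    osd pool default min size = 1   # assumption: allow I/O with a single replica up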
[23:51] * dgurtner (~dgurtner@178.197.231.115) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.