#ceph IRC Log

IRC Log for 2012-11-01

Timestamps are in GMT/BST.

[0:08] * jlogan1 (~Thunderbi@2600:c00:3010:1:3880:bbab:af7:6407) Quit (Ping timeout: 480 seconds)
[0:14] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) Quit (Quit: ["Textual IRC Client: www.textualapp.com"])
[0:27] <joey__> cool
[0:27] <joey__> so, FYI only the 'storage' interfaces are 10G
[0:27] <joey__> which are all eth3 on the computes
[0:32] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[0:35] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit ()
[0:36] * pentabular (~sean@adsl-70-231-143-235.dsl.snfc21.sbcglobal.net) has joined #ceph
[0:36] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[0:43] * Enigmagic (enigmo@c-50-148-128-194.hsd1.ca.comcast.net) has joined #ceph
[0:45] <Enigmagic> are there any known issues with v0.48.1 and the posix filesystem (using the kernel cephfs on 3.4.4)? i've been playing around with it and it seems to lose files sometimes
[0:45] <Enigmagic> either they disappear completely or sometimes the file contents are replaced with nulls
[0:48] <nhmlap> BOOOOOOOOOO
[0:49] * MikeMcClurg (~mike@91.224.174.75) has joined #ceph
[0:53] * Ryan_Lane (~Adium@216.38.130.163) Quit (Quit: Leaving.)
[0:53] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) has joined #ceph
[0:54] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[0:55] * maxim (~pfliu@111.192.242.55) has joined #ceph
[0:56] * Ryan_Lane (~Adium@216.38.130.163) has joined #ceph
[0:57] * maxim (~pfliu@111.192.242.55) Quit ()
[0:58] <nhmlap> Enigmagic: that's pretty strange. Haven't seen that myself.
[0:58] <Enigmagic> nhmlap: hum... i was hoping it was a known issue or fixed in some more recent kernel or ceph release :/
[0:59] * sjusthm (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) Quit (Ping timeout: 480 seconds)
[0:59] <nhmlap> Enigmagic: yeah, not that I know of, sorry. :/
[1:00] <nhmlap> Enigmagic: Some of the other guys that have done more work on cephfs might know something that I don't though...
[1:01] <Enigmagic> nhmlap: do they come in here or should i try sending a mail to the devel list?
[1:02] <nhmlap> Enigmagic: yeah, I think both Greg and Sage are in copenhagen for the ubuntu developer summit right now. Dev list might be better.
[1:03] <nhmlap> If nothing else maybe another user will chime in with a similar story/problem.
[1:04] * buck (~buck@bender.soe.ucsc.edu) has left #ceph
[1:05] <Enigmagic> will do
[1:06] <joshd> it'd be great if you could find steps to reproduce the problem and file a bug too
[1:07] <Enigmagic> so far it seems to happen when bulk loading data into the cluster but we don't have a good repro
[1:08] <Enigmagic> some older data gets corrupted when loading new data in though, which is the less fun part
[1:09] <joshd> that's very strange, I don't remember any bugs like that either
[1:11] * sjusthm (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[1:12] <Enigmagic> i thought it was related to restarting OSDs while loading data but we just had it happen today when all the daemons have been stable
[1:13] <joshd> have you been using cephfs snapshots? that's one of the less stable areas
[1:14] <Enigmagic> nope.
[1:14] <Enigmagic> 99% read access with bulk writing at night
[1:15] <Enigmagic> no snapshots
[1:17] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[1:27] * tnt (~tnt@20.35-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[1:27] * MikeMcClurg (~mike@91.224.174.75) Quit (Ping timeout: 480 seconds)
[1:36] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has joined #ceph
[1:46] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[1:47] * Ryan_Lane (~Adium@216.38.130.163) Quit (Quit: Leaving.)
[1:48] * Ryan_Lane (~Adium@216.38.130.163) has joined #ceph
[1:48] * sjusthm (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) Quit (Remote host closed the connection)
[1:51] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has left #ceph
[1:51] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[2:00] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Quit: Leseb)
[2:03] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[2:07] * Cube1 (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[2:08] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[2:08] * Ryan_Lane (~Adium@216.38.130.163) Quit (Quit: Leaving.)
[2:16] * Ryan_Lane (~Adium@216.38.130.163) has joined #ceph
[2:20] * Ryan_Lane (~Adium@216.38.130.163) Quit ()
[2:20] * dmick (~dmick@2607:f298:a:607:746d:6b8c:2c54:3594) Quit (Quit: Leaving.)
[2:30] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[2:35] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[2:38] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[2:38] * loicd (~loic@magenta.dachary.org) has joined #ceph
[2:38] * lofejndif (~lsqavnbok@28IAAISJU.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[2:45] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[2:46] * adjohn (~adjohn@69.170.166.146) Quit (Quit: adjohn)
[2:54] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[3:00] * maxim (~pfliu@202.108.130.138) has joined #ceph
[3:00] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) Quit (Ping timeout: 480 seconds)
[3:06] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[3:07] * mdxi (~mdxi@74-95-29-182-Atlanta.hfc.comcastbusiness.net) has joined #ceph
[3:10] * iggy (~iggy@theiggy.com) Quit (Quit: No Ping reply in 180 seconds.)
[3:10] * iggy (~iggy@theiggy.com) has joined #ceph
[3:14] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:31] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[3:32] * chutzpah (~chutz@199.21.234.7) Quit (Quit: Leaving)
[3:33] * loicd (~loic@magenta.dachary.org) has joined #ceph
[3:37] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) Quit (Quit: Leaving.)
[3:50] * darkfaded (~floh@188.40.175.2) has joined #ceph
[3:55] * darkfader (~floh@188.40.175.2) Quit (Ping timeout: 480 seconds)
[3:56] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[4:08] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[4:19] * iggy2 (~iggy@theiggy.com) has joined #ceph
[4:21] * Anticimex (anticimex@netforce.csbnet.se) Quit (Remote host closed the connection)
[4:22] * Anticimex (anticimex@netforce.csbnet.se) has joined #ceph
[4:22] * benner (~benner@193.200.124.63) Quit (Remote host closed the connection)
[4:22] * glowell2 (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * rosco (~r.nap@188.205.52.204) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * gregorg (~Greg@78.155.152.6) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * stass (stas@ssh.deglitch.com) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * Qten (Q@qten.qnet.net.au) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * jmcdice_ (~root@135.13.255.151) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * wonko_be (bernard@november.openminds.be) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * rz (~root@ns1.waib.com) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * hijacker (~hijacker@213.91.163.5) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * tontsa (~tontsa@solu.fi) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * nhm_ (~nh@184-97-251-146.mpls.qwest.net) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * Meyer___ (meyer@c64.org) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * Robe (robe@amd.co.at) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * MarkS (~mark@irssi.mscholten.eu) Quit (synthon.oftc.net charon.oftc.net)
[4:22] * liiwi (liiwi@idle.fi) Quit (synthon.oftc.net charon.oftc.net)
[4:24] * rlr219 (43c87e04@ircip2.mibbit.com) Quit (Ping timeout: 480 seconds)
[4:24] * iggy (~iggy@theiggy.com) Quit (Ping timeout: 480 seconds)
[4:27] * benner (~benner@193.200.124.63) has joined #ceph
[4:27] * glowell2 (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[4:27] * rosco (~r.nap@188.205.52.204) has joined #ceph
[4:27] * gregorg (~Greg@78.155.152.6) has joined #ceph
[4:27] * stass (stas@ssh.deglitch.com) has joined #ceph
[4:27] * Qten (Q@qten.qnet.net.au) has joined #ceph
[4:27] * Robe (robe@amd.co.at) has joined #ceph
[4:27] * nhm_ (~nh@184-97-251-146.mpls.qwest.net) has joined #ceph
[4:27] * Meyer___ (meyer@c64.org) has joined #ceph
[4:27] * tontsa (~tontsa@solu.fi) has joined #ceph
[4:27] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[4:27] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) has joined #ceph
[4:27] * rz (~root@ns1.waib.com) has joined #ceph
[4:27] * wonko_be (bernard@november.openminds.be) has joined #ceph
[4:27] * jmcdice_ (~root@135.13.255.151) has joined #ceph
[4:27] * MarkS (~mark@irssi.mscholten.eu) has joined #ceph
[4:27] * liiwi (liiwi@idle.fi) has joined #ceph
[4:39] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) has joined #ceph
[5:01] * nwatkins (~nwatkins@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: leaving)
[5:10] * jjgalvez (~jjgalvez@cpe-76-175-17-226.socal.res.rr.com) has joined #ceph
[5:19] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[5:30] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[5:40] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[5:48] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[5:55] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[5:55] * nolan (~nolan@phong.sigbus.net) Quit (Quit: ZNC - http://znc.sourceforge.net)
[5:59] * tore_ (~tore@110.50.71.129) has joined #ceph
[6:03] * maxim (~pfliu@202.108.130.138) Quit (Ping timeout: 480 seconds)
[6:04] * maxim (~pfliu@202.108.130.138) has joined #ceph
[6:07] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[6:14] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) Quit (Quit: Leaving.)
[6:17] <tore_> Anyone have an idea of when the next stable release will be out?
[6:17] <tore_> Thx in advance btw
[6:27] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[6:28] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[6:31] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit ()
[6:45] * gaveen (~gaveen@112.134.112.9) has joined #ceph
[6:47] * pentabular (~sean@adsl-70-231-143-235.dsl.snfc21.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[6:47] * pentabular (~sean@adsl-70-231-143-235.dsl.snfc21.sbcglobal.net) has joined #ceph
[6:48] * pentabular is now known as Guest3982
[6:50] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) Quit (Quit: Leaving.)
[7:06] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[7:27] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[7:34] * adjohn (~adjohn@108-225-130-229.lightspeed.sntcca.sbcglobal.net) Quit (Quit: adjohn)
[7:35] <mikeryan> tore_: believe we're tagging a v0.XX release in two weeks
[7:36] <mikeryan> it will be v0.55, bobtail
[7:36] <mikeryan> actually scratch that
[7:36] <mikeryan> bobtail tagged in two weeks, released in probably about five
[7:36] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) Quit (Ping timeout: 480 seconds)
[7:37] <tore_> cool - will v0.55 be released with ubuntu packages?
[7:37] <mikeryan> we will provide .deb files, yes
[7:38] <mikeryan> i'm not sure if/when you'll be able to apt-get install it from ubuntu directly
[7:39] <tore_> good show. my managers only want me evaluating the stable releases. Right now I'm stuck with 0.48. Hopefully I should be able to get a dev environment deployed in the next few days where I can deploy a daily build from github
[7:40] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Remote host closed the connection)
[7:41] <mikeryan> i believe we provide daily .debs, but i don't have the URL handy
[7:42] <mikeryan> http://gitbuilder.ceph.com/
[7:42] <mikeryan> there are ubuntu repos in there that you should be able to add to your sources.list.d
[7:42] <tore_> yep i see the prec x86_64... thank you very much this will be helpful
[7:43] <mikeryan> np
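(A minimal sketch of the "add it to your sources.list.d" step described above; the repository path and suite name below are assumptions, so browse http://gitbuilder.ceph.com/ for the directory matching your distro and architecture before using it.)

    # add a gitbuilder deb repo (path/suite are placeholders - verify on gitbuilder.ceph.com)
    echo "deb http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/ref/master precise main" | \
        sudo tee /etc/apt/sources.list.d/ceph-gitbuilder.list
    sudo apt-get update
    sudo apt-get install ceph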
[8:16] * stass (stas@ssh.deglitch.com) Quit (Read error: Connection reset by peer)
[8:17] * stass (stas@ssh.deglitch.com) has joined #ceph
[8:20] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) has joined #ceph
[8:24] * gucki (~smuxi@46-127-158-51.dynamic.hispeed.ch) Quit (Remote host closed the connection)
[8:25] * ramsay_za (~ramsay_za@41.215.234.234) has joined #ceph
[8:26] * iggy2 is now known as iggy
[8:39] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[8:41] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[8:41] * bchrisman (~Adium@c-76-103-130-94.hsd1.ca.comcast.net) has joined #ceph
[8:41] * loicd (~loic@magenta.dachary.org) has joined #ceph
[8:52] * Ryan_Lane (~Adium@c-67-160-217-184.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:52] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[8:57] * tnt (~tnt@20.35-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[9:12] * gaveen (~gaveen@112.134.112.9) Quit (Remote host closed the connection)
[9:17] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:18] * MikeMcClurg (~mike@91.224.175.20) has joined #ceph
[9:25] * stxShadow (~Jens@ip-178-203-169-190.unitymediagroup.de) has joined #ceph
[9:28] * tnt (~tnt@20.35-67-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[9:28] * stxShadow (~Jens@ip-178-203-169-190.unitymediagroup.de) has left #ceph
[9:34] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) Quit (Read error: Connection reset by peer)
[9:38] * Karcaw (~evan@68-186-68-219.dhcp.knwc.wa.charter.com) has joined #ceph
[9:40] <todin> morning 'ceph
[9:41] <ramsay_za> morning
[9:43] * Nrg3tik (~Nrg3tik@78.25.73.250) has joined #ceph
[9:43] * Nrg3tik (~Nrg3tik@78.25.73.250) has left #ceph
[9:44] <dweazle> anyone here attending the ceph workshop in amsterdam tomorrow? :)
[9:46] <todin> dweazle: I do
[9:50] * tnt (~tnt@212-166-48-236.win.be) has joined #ceph
[9:50] <dweazle> coming all the way from .de? :)
[9:50] <dweazle> that's cool
[9:51] <todin> dweazle: ep from berlin
[9:51] <todin> dweazle: s/ep/yep/
[9:53] <dweazle> i think it'll be interesting, i don't have any hands on experience with ceph yet
[9:54] <todin> dweazle: I hope so, I have had a cluster running for the last two years
[9:55] <dweazle> i only found out this morning that there are actually rpm's for rhel^Wcentos 6 on gitmaster
[9:55] <dweazle> what do you use it mainly for?
[9:55] <todin> for vm (cloud) hosting
[9:55] <dweazle> ah, which company?
[9:55] <dweazle> in-berlin?
[9:56] <dweazle> do you use kvm+rbd?
[9:56] <ramsay_za> what's your cluster look like, server size, networking?
[9:57] * ctrl (~Nrg3tik@78.25.73.250) has joined #ceph
[9:57] <todin> dweazle: yes, kvm+rbd, the company is called strato strato.de
[9:57] <ramsay_za> running a cluster on 3 36-disk nodes with 10Gb networking
[9:57] <dweazle> ah strato
[9:57] * loicd (~loic@magenta.dachary.org) Quit (Quit: Leaving.)
[9:58] <dweazle> todin: we intend to use it for vm hosting as well
[9:58] <todin> ramsay_za: the cluster looks very similar, 24-disk nodes and dual 10GbE
[9:59] <todin> dweazle: where are you from?
[9:59] <dweazle> todin: netherlands / tilaa
[9:59] <ramsay_za> how many nodes?
[9:59] <todin> ramsay_za: atm 8 Nodes
[10:01] <ramsay_za> you running separate mon servers or do a few just run both osds and mon?
[10:01] <todin> I run mons and osds on the same node
[10:03] <ramsay_za> cool, full sata? any perf stats for your cluster?
[10:04] <todin> ramsay_za: I use sas disks for the store and sata intel ssds for the journal. I tried different raid controllers, but atm I use a simple SAS HBA; the performance is better, and it is cheaper
[10:05] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[10:06] <ramsay_za> any burn out on the ssds? we kill them in months
[10:06] <dweazle> todin: are you using MLC ssd's? how are they holding up?
[10:06] <dweazle> ramsay_za: heh same thought
[10:06] <ramsay_za> lol, yeah, it's the biggest issue with ssd atm
[10:07] <tontsa> so what happens when that journal ssd breaks?
[10:07] <todin> atm I use the 520 Intel ssd, they wear out quite quickly. but the cluster is not productive yet. in a few months there comes a new intel ssd, the dc s3700
[10:07] <todin> tontsa: your osd will break
[10:08] <ramsay_za> we switched to ocz vertex 4 which gave us a ton more life
[10:09] <ramsay_za> yep, if your journal pegs your osd goes offline; have to monitor the hell out of it if you use ssd
[10:09] <ramsay_za> also not prod this side just doing burn in testing
[10:09] <todin> ramsay_za: do you have any expierence how good the smartctrl wear out info is?
[10:10] <tnt> are you using all the ssd space ? I mean if you make a partition of 1/4 of the disk and never use the rest, it should substantially increase their lives shouldn't it ?
[10:10] <ramsay_za> pretty accurate also if you monitor dmesg for delayed writes it's a nice indicator
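(For the SMART wear question above, a hedged example of what checking it can look like; the attribute name varies by vendor - on Intel SSDs it is typically "Media_Wearout_Indicator" - so the grep pattern is an assumption to adapt.)

    # dump SMART data for the journal SSD and look for wear/endurance attributes
    sudo smartctl -a /dev/sdb | grep -i -E 'wear|media|percent'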
[10:11] <dweazle> does ceph provide decent performance if you put the journals on a hdd raid-1 instead?
[10:11] <dweazle> or is it very latency sensitive?
[10:11] <todin> tnt: I use a 50G partition on a 200G ssd
[10:12] <todin> dweazle: for hosting, ceph is latency sensitive
[10:13] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) has joined #ceph
[10:13] <dweazle> if ssd really is a must then my plan to make each of my VM hosts a ceph osd is not really practical without either investing huge cash in slc or replacing mlc every 4-6 months..
[10:13] <dweazle> might be better off building a couple of big storage nodes instead
[10:14] <ramsay_za> ideally your journal should be faster than your osd, so a raid 10 journal would give you that; if your journal is the same speed as your osd you are not gonna see any perf gain from using a journal
[10:14] <ramsay_za> the newer 4th gen ssds have a much higher mtbf so they should last a year or 2
[10:15] * Leseb (~Leseb@193.172.124.196) has joined #ceph
[10:15] <todin> dweazle: wait for the new high endurance mlc, or for a cheaper slc vendor; we have located a few chinese companies which make quite nice slc ssds
[10:15] <dweazle> ah, interesting
[10:16] <dweazle> ok, next thing i'm wondering.. is it better to put an osd on a raid set or to start an osd for each hdd independently?
[10:16] * loicd (~loic@178.20.50.225) has joined #ceph
[10:16] <ramsay_za> the journal is extremely chatty so it burns cells fast, so put in a drive that has cell life management
[10:17] <ramsay_za> I do one osd per drive, let ceph do the rep
[10:17] <dweazle> that seems the most optimal, but i wonder how ceph would respond to a partially failing disk
[10:18] <dweazle> like, bad sectors, high latency .. most raid controllers notice that and kick it out of the array, all that intelligence is lost if you revert to JBOD
[10:19] <ramsay_za> in my testing, as soon as there is a series of bad read/write transactions the osd goes offline
[10:20] <dweazle> that's good to know
[10:21] <ramsay_za> the fact that vendors are putting 5 year warranties on ssds now is reassuring, means if you do burn one you don't have to fork out cash money for a new one
[10:21] <ramsay_za> todin: did you try running on 1Gb nics at any point?
[10:21] <tnt> btw, do you guys use a single ssd for multiple osd drives or one ssd per osd drive ?
[10:22] <dweazle> ramsay_za: the way i understand it is that the warranty says either 5 years or when it wears out, whichever comes first
[10:22] <ramsay_za> one ssd for multiple osds
[10:23] <ramsay_za> dweazle: just reading the OCZ warranty and no mention of duty cycle yet
[10:24] <dweazle> ok:)
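(A sketch of the layout being discussed above - one OSD per spinning disk, journals on small partitions of a shared SSD. The hostnames, device names and paths are illustrative placeholders, not anyone's actual ceph.conf; when the journal points at a raw partition, ceph uses the whole partition.)

    [osd.0]
        host = node1
        osd data = /var/lib/ceph/osd/ceph-0   # on its own spinning disk
        osd journal = /dev/sdb1               # small partition on the shared SSD
    [osd.1]
        host = node1
        osd data = /var/lib/ceph/osd/ceph-1
        osd journal = /dev/sdb2               # second partition on the same SSD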
[10:32] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) Quit (Remote host closed the connection)
[10:32] * silversurfer (~silversur@124x35x68x250.ap124.ftth.ucom.ne.jp) has joined #ceph
[10:35] <dweazle> mm dead link to the dc s3700 ssd on the intel website (404).. meh
[10:42] <ramsay_za> I try and avoid intel, more of a personal thing though
[10:43] * maxim (~pfliu@202.108.130.138) Quit (Remote host closed the connection)
[10:44] <dweazle> i just want affordable endurance, don't care where it comes from really
[10:44] <dweazle> but intel is probably not my best bet at this point
[10:48] <todin> dweazle: I use intel, because our contact to them is quite good
[10:51] <dweazle> todin: do you get large discounts on list prices? :)
[11:03] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[11:16] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[11:17] * deepsa_ (~deepsa@101.63.250.13) has joined #ceph
[11:18] <todin> ramsay_za: in the beginning i used 1GbE, but changed quickly to 10GbE
[11:19] <todin> dweazle: I think the discounts are not so great, but we have a nice exchange policy of broken ssds
[11:21] * deepsa (~deepsa@122.172.1.71) Quit (Ping timeout: 480 seconds)
[11:21] * deepsa_ is now known as deepsa
[11:22] <ramsay_za> todin: running arista?
[11:25] <todin> ramsay_za: yep, but now trying mellanox
[11:25] <todin> ramsay_za: same performance, but much cheaper
[11:27] <ramsay_za> todin: SFP or copper rj45?
[11:27] <todin> ramsay_za: SFP+ direct attached copper, rj45 has a higher latency
[11:32] <ramsay_za> todin: done your homework I see yeah we are moving to twinax due to the better perf
[11:33] <todin> ramsay_za: are you coming tomorrow to the workshop?
[11:34] <dweazle> how much storage traffic would you expect to see from a typical VM host (with 96GB RAM, running 60 VM's or so)
[11:34] <todin> dweazle: it really depends on the workload of the vms.
[11:35] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[11:37] <dweazle> todin: of course, but would 2gbit suffice for a typical workload on such a vm host?
[11:37] * MikeMcClurg (~mike@91.224.175.20) Quit (Ping timeout: 480 seconds)
[11:37] <dweazle> i really haven't got a clue
[11:39] <todin> dweazle: a normal hard disk has a bandwidth of about 100MB/s, so you could "emulate" two disks under a full load test.
[11:39] <tnt> personally I try to minimize it ... all bulk data are stored on radosgw rather than the vm filesystem, and the DBs are separate servers with physical disks, so IO from the vm fs is really minimal.
[11:39] <todin> but the best way is just to test it, or if the machine is atm dedicated, look at iostat
[11:40] <todin> tnt: yeah, thats a nice design choice
[11:43] <dweazle> well, if i simply take a look at peak bandwidth usage on disk i/o it shouldn't really exceed 1gbit all that often, so i guess a 2gbit trunk should suffice
[11:44] <dweazle> but ceph might add some overhead
[11:44] <dweazle> blocks need to be written to two different osd's at least
[11:45] <todin> dweazle: yes, the osd to osd connection is important as well
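(A quick back-of-the-envelope version of that estimate: 100 MB/s per disk is roughly 0.8 Gbit/s, so "emulating" two disks is about 1.6 Gbit/s at peak, which a 2 Gbit trunk covers. Bear in mind that with a replica count of 2 every client write also has to reach a second OSD, so write-heavy periods roughly double the traffic on the storage/OSD-to-OSD side.)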
[11:45] <todin> gotta catch my flight to amsterdam, is someone there this afternoon?
[11:46] <dweazle> i'll be there tomorrow morning
[11:46] <dweazle> thanks for the pointers and have a good trip
[11:49] <todin> dweazle: thanks, we still can talk about it tomorrow.
[11:56] <dweazle> if i see you i will :) cheers
[12:02] * tryggvil__ (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[12:05] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Ping timeout: 480 seconds)
[12:05] * tryggvil__ is now known as tryggvil
[12:07] * yoshi (~yoshi@p37219-ipngn1701marunouchi.tokyo.ocn.ne.jp) Quit (Remote host closed the connection)
[12:15] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) has joined #ceph
[12:19] * gaveen (~gaveen@112.134.112.233) has joined #ceph
[12:23] * drokita (~drokita@24-107-180-86.dhcp.stls.mo.charter.com) Quit (Ping timeout: 480 seconds)
[12:25] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has joined #ceph
[12:26] * MikeMcClurg (~mike@91.224.175.20) has joined #ceph
[12:26] * EmilienM (~EmilienM@195-132-228-252.rev.numericable.fr) has left #ceph
[12:31] * maxim (~pfliu@222.128.144.42) has joined #ceph
[12:40] * gaveen (~gaveen@112.134.112.233) Quit (Remote host closed the connection)
[12:40] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[12:48] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[13:07] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Quit: tryggvil)
[13:14] <ramsay_za> todin: nope not gonna make it.
[13:14] * MikeMcClurg (~mike@91.224.175.20) Quit (Ping timeout: 480 seconds)
[13:46] * Guest3982 is now known as pentabular
[14:00] * MikeMcClurg (~mike@91.224.175.20) has joined #ceph
[14:11] * pentabular (~sean@adsl-70-231-143-235.dsl.snfc21.sbcglobal.net) Quit (Remote host closed the connection)
[14:18] * pentabular (~sean@70.231.143.235) has joined #ceph
[14:27] * drokita (~drokita@199.255.228.10) has joined #ceph
[14:31] * nhmlap (~nhm@184-97-251-146.mpls.qwest.net) Quit (Quit: Lost terminal)
[14:37] * nhorman (~nhorman@nat-pool-rdu.redhat.com) has joined #ceph
[14:41] * deepsa (~deepsa@101.63.250.13) Quit (Ping timeout: 480 seconds)
[14:42] <tnt> Mmm, so if I understand correctly if the size of a pool is 2 that means each object is replicated 3 times ?
[14:42] <tnt> ( I'm looking at http://ceph.com/docs/master/cluster-ops/pools/ )
[14:43] * deepsa (~deepsa@122.172.170.221) has joined #ceph
[14:50] <ramsay_za> nope that will mean there are 2 copies in the env
[14:50] <ramsay_za> a size of 3 means there are 3 copies
[14:52] <tnt> Ok. The doc is a bit misleading.
[14:52] <tnt> see the last sentence of the link
[14:54] <ramsay_za> hmm, interesting point, the older docs don't make that statement. let me test this quick.
[14:57] <elder> Man, gmail is being VERY SLOW for me today.
[14:58] <ramsay_za> you are indeed correct that a size of 2 means that an additional 2 copies will be created when the object is written
[15:00] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) has joined #ceph
[15:02] <tnt> ramsay_za: that's still strange because a ceph pg dump only shows 2 osd per PG for a pool of size=2 ...
[15:02] * PerlStalker (~PerlStalk@perlstalker-1-pt.tunnel.tserv8.dal1.ipv6.he.net) has joined #ceph
[15:02] <tnt> or maybe that changed recently ? I'm using 0.48.2 here
[15:03] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[15:04] <ramsay_za> looks like it has always been that way, don't see it in any of the changelogs
[15:04] <tnt> but then why do pg dump only lists 2 osd ?
[15:05] * MikeMcClurg (~mike@91.224.175.20) Quit (Read error: No route to host)
[15:21] <ramsay_za> just looking up the code that actually does this to try get to the bottom of this quick, also a little confused
[15:22] <ramsay_za> from the code # size is the number of copies; primary+replicas
[15:22] <ramsay_za> http://ceph.com/docs/master/dev/placement-group/
[15:23] <ramsay_za> so a size of 2 means that there are 2 identical items, i.e. the primary and a replica
[15:24] <tnt> Ok, so that's what I originally thought ... but the doc said otherwise so that confused me :p
[15:25] <ramsay_za> yep confused the both of us, I'll see if any of the doc maintainers are online so they can do an update
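(To make the conclusion above concrete, a couple of hedged commands for inspecting and setting the replica count of a pool; they exist in the 0.48-era CLI, though the exact `osd dump` output format differs between versions.)

    ceph osd dump | grep 'rep size'     # each pool's replica count ("size")
    ceph osd pool set rbd size 3        # keep 3 copies in total: primary + 2 replicas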
[15:29] * long (~chatzilla@118.186.58.35) has joined #ceph
[15:30] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has left #ceph
[15:31] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[15:37] <tnt> Damn, I've created the cluster 3 times now and I can't get pg_bits to work ...
[15:38] <tnt> I have osd pg bits = 7 in a [global] section on all the nodes ... what else do I need ?
[15:38] <tnt> I tried [general] as well
[15:38] * MikeMcClurg (~mike@91.224.175.20) has joined #ceph
[15:42] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) has joined #ceph
[15:43] * long (~chatzilla@118.186.58.35) Quit (Remote host closed the connection)
[15:43] * josef (~seven@li70-116.members.linode.com) has joined #ceph
[15:43] <josef> did you guys kill obsync?
[15:45] <josef> ah yeah there it is
[15:45] <tnt> I hope not ... that's my planned backup utils
[15:46] <josef> well just out of hte main ceph repo
[15:48] * long (~chatzilla@118.186.58.35) has joined #ceph
[16:03] * buck (~buck@bender.soe.ucsc.edu) has joined #ceph
[16:04] * MikeMcClurg (~mike@91.224.175.20) Quit (Ping timeout: 480 seconds)
[16:16] * aliguori (~anthony@cpe-70-123-145-75.austin.res.rr.com) Quit (Quit: Ex-Chat)
[16:20] <tnt> Interesting ... the read speed of an rbd drive that has never been written sucks ... a lot.
[16:24] * yeled (~yeled@spodder.com) has joined #ceph
[16:24] <yeled> re
[16:24] * danieagle (~Daniel@177.133.175.201) has joined #ceph
[16:28] * hhoover (~hhoover@of2-nat1.sat6.rackspace.com) has joined #ceph
[16:32] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[16:34] * jlogan1 (~Thunderbi@2600:c00:3010:1:3880:bbab:af7:6407) has joined #ceph
[16:35] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) Quit (Quit: Leaving.)
[16:52] * aliguori (~anthony@32.97.110.59) has joined #ceph
[16:53] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) has joined #ceph
[16:53] * rweeks (~rweeks@c-98-234-186-68.hsd1.ca.comcast.net) has joined #ceph
[17:04] * davidz (~Adium@ip68-96-75-123.oc.oc.cox.net) has joined #ceph
[17:12] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[17:13] * jluis (~androirc@197.20.43.5.rev.vodafone.pt) has joined #ceph
[17:13] <joey__> I'm having an issue with the RHEL based ceph rc script - where it doesn't start my ceph service. :(
[17:15] * long (~chatzilla@118.186.58.35) Quit (Quit: ChatZilla 0.9.89 [Firefox 16.0.2/20121024073032])
[17:15] * jluis (~androirc@197.20.43.5.rev.vodafone.pt) has left #ceph
[17:16] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:17] <joey__> it's also kind of hard to debug because of the lack of output
[17:17] <mikeryan> tnt: osd pg bits goes in the [osd] section
[17:17] <mikeryan> and yep, those docs are wrong
[17:18] <mikeryan> well, misleading
[17:18] <mikeryan> "two replicas" means "two replicas in addition to the primary"
[17:18] <mikeryan> so size = 3
[17:18] <tnt> mikeryan: well, I just dropped it for now. I'm not using cephfs anyway so I just recreated a rbd pool manually and set pg_num at creation time to put my images
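(A sketch of the two approaches mentioned here: `osd pg bits` placed under [osd] as mikeryan says - it only takes effect for the pools created when the cluster is first built - versus recreating the rbd pool with an explicit pg count. The pg number is an illustrative value, and newer releases require extra confirmation arguments for pool deletion.)

    # in ceph.conf, before creating the cluster:
    [osd]
        osd pg bits = 7

    # or, on an existing cluster, recreate the pool with an explicit pg count:
    ceph osd pool delete rbd
    ceph osd pool create rbd 512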
[17:19] <mikeryan> joey__: i'm not familiar with rhel, can you manually run the script?
[17:19] <joey__> so, in rhel..
[17:20] <joey__> there are a set of init scripts in /etc/init.d/
[17:20] <joey__> for ceph there are two: ceph and ceph-radosgw
[17:20] <joey__> these are both configured to start on boot (run levels 2,3,4,5 )
[17:21] <joey__> so, running: /etc/init.d/ceph start
[17:21] <joey__> produces no output but returns 0 as the return code
[17:21] <joey__> but, there's no ceph processes running.
[17:21] <joey__> and no output is returned
[17:21] <rweeks> nothing in logs?
[17:22] <mikeryan> joey__: that's probably written in bash
[17:22] <mikeryan> you can try bash -v /etc/init.d/ceph start
[17:22] <mikeryan> and see if it bombs out early or something
[17:22] <joey__> nothing in the logs..
[17:22] <joey__> ok, lemme poke around there.
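(Two standard ways to see what the init script is actually doing - plain bash options, nothing ceph-specific: -x traces each command as it runs, -v echoes the script as it is read. One common cause of "returns 0 but starts nothing", offered as an assumption to check rather than a diagnosis: the sysvinit script only starts daemons whose `host =` entry in ceph.conf matches the local short hostname.)

    bash -x /etc/init.d/ceph start      # trace each command as it executes
    bash -v /etc/init.d/ceph start      # echo the script source as it is read
    hostname -s                         # compare against the host = entries in ceph.conf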
[17:24] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:30] * sjusthm (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:33] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:37] <todin> reloc in amsterdam
[17:41] * tnt (~tnt@212-166-48-236.win.be) Quit (Ping timeout: 480 seconds)
[17:44] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) has joined #ceph
[17:44] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[17:45] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:50] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[17:50] * tnt (~tnt@20.35-67-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[17:58] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[18:03] <joey__> hey
[18:04] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[18:05] <todin> hi
[18:11] <slang> joey__, jjgalvez: talked with sage about your client hangs, it sounds like we should enable client and message debug logging on those fuse clients
[18:12] * chutzpah (~chutz@199.21.234.7) has joined #ceph
[18:14] * joao-mobile (~androirc@197.20.43.5.rev.vodafone.pt) has joined #ceph
[18:16] <jtang> right ceph-workshop tomorrow in amsterdam
[18:16] <jtang> my colleagues are gonna be so wrecked tomorrow :P
[18:25] <joey__> slang: ok
[18:25] <joey__> slang: I was hoping to schedule a call with you guys to talk out a few things.
[18:25] <joey__> asked our PM to get it setup.
[18:31] * joao (~JL@89.181.150.224) Quit (Remote host closed the connection)
[18:39] * Leseb (~Leseb@193.172.124.196) Quit (Quit: Leseb)
[18:39] * loicd (~loic@178.20.50.225) Quit (Ping timeout: 480 seconds)
[18:41] * joao-mobile (~androirc@197.20.43.5.rev.vodafone.pt) Quit (Ping timeout: 480 seconds)
[18:41] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[18:41] * nwatkins (~Adium@soenat3.cse.ucsc.edu) has joined #ceph
[18:44] * BManojlovic (~steki@212.200.241.231) has joined #ceph
[18:45] * adjohn (~adjohn@69.170.166.146) has joined #ceph
[18:50] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[18:50] * houkouonchi-work (~linux@12.248.40.138) Quit ()
[18:52] <slang> joey__: ok
[18:52] * houkouonchi-work (~linux@12.248.40.138) has joined #ceph
[18:53] * Ryan_Lane (~Adium@c-50-143-138-153.hsd1.ca.comcast.net) has joined #ceph
[18:56] * tryggvil (~tryggvil@rtr1.tolvusky.sip.is) Quit (Ping timeout: 480 seconds)
[19:08] * dmick (~dmick@2607:f298:a:607:c08e:825e:dcf3:e507) has joined #ceph
[19:10] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) has joined #ceph
[19:12] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:13] * Leseb_ (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) has joined #ceph
[19:13] * Leseb (~Leseb@5ED01FAC.cm-7-1a.dynamic.ziggo.nl) Quit (Read error: Connection reset by peer)
[19:13] * Leseb_ is now known as Leseb
[19:28] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) Quit (Quit: Leaving.)
[19:29] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) has joined #ceph
[19:29] * jmlowe (~Adium@c-71-201-31-207.hsd1.in.comcast.net) Quit ()
[19:31] * loicd (~loic@90.84.146.228) has joined #ceph
[19:32] <elder> joshd, ceph/wip-xfstests is ready for your review :)
[19:33] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) Quit (Ping timeout: 480 seconds)
[19:33] <elder> dmick, you too.
[19:33] <elder> Also, perhaps confusingly, teuthology/wip-xfstests is also available for review.
[19:33] <elder> The ceph commit has to be done before the teuthology one.
[19:35] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) Quit (Ping timeout: 480 seconds)
[19:36] * sjusthm (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) Quit (Ping timeout: 480 seconds)
[19:37] <joshd> elder: in run_xfstests, it should exit with the status if it's non-zero after each run
[19:37] <elder> I tried that, and it does.
[19:37] <elder> The set -e early in the script takes care of it I think.
[19:37] <joshd> ah, it's using set -e
[19:37] <joshd> right
[19:38] <elder> That came from Tommi.
[19:38] <joshd> that just means the final exit ${status} will never run except when status is 0
[19:38] <elder> I never used "set -e" before but I see he swears by it
[19:38] <elder> I hadn't thought of that...
[19:39] <elder> I don't mind getting rid of "set -e"
[19:39] <elder> And adding a test of status after each iteration.
[19:39] <joshd> no, I'd rather keep set -e
[19:39] <elder> OK.
[19:40] <joshd> the explicit exit at the end just confused me
[19:40] <elder> Most likely there before "set -e" was added.
[19:40] <joshd> yeah
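(A tiny illustration of the behaviour being discussed, with a hypothetical ./run_tests standing in for the real test invocation: under `set -e` any unguarded failing command terminates the script immediately, so an explicit final `exit ${status}` can only ever be reached with status 0.)

    #!/bin/bash
    set -e
    status=0
    for run in 1 2 3; do
        ./run_tests            # hypothetical; a non-zero exit here ends the
        status=$?              # script right away because of set -e
    done
    exit ${status}             # therefore only reachable when status is 0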
[19:40] <joshd> so the ceph change is fine then
[19:41] * Ryan_Lane (~Adium@c-50-143-138-153.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[19:41] <elder> Do I just add your reviewed-by and commit it myself then?
[19:41] <joshd> the teuthology one I'd like to see some text near the example in the comment for def task() that explains what count does
[19:41] <dmick> ceph change makes sense to me too
[19:41] <joshd> elder: yes
[19:42] <joshd> otherwise the teuthology change looks fine to me
[19:42] <elder> joshd, I'll add a description. I thought I had done that, but I think it was just in the shell script
[19:43] * loicd (~loic@90.84.146.228) Quit (Ping timeout: 480 seconds)
[19:44] <dmick> I really am growing to dislike '{field}'
[19:44] <dmick> .format(field=field)
[19:44] <dmick> it's so damn verbose
[19:44] * CristianDM (~CristianD@host173.186-124-185.telecom.net.ar) has joined #ceph
[19:45] <dmick> not that I'm suggesting a change. just whining.
[19:45] <CristianDM> Hi.
[19:45] <elder> Maybe if I didn't use the same "field" with different syntactic purpose it wouldn't seem so wrong.
[19:45] <CristianDM> I'm running a ceph cluster with RBD usage.
[19:45] <dmick> elder: no, it's a standard idiom
[19:45] <dmick> I just don't like it
[19:45] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[19:45] <dmick> and hi CristianDM
[19:45] <CristianDM> Is it possible to add an mds, and work with cephfs and RBD at the same time?
[19:45] <elder> "You don't like {it}".format(it=it)
[19:46] <dmick> exactly exactly exactly.
[19:46] * wer (~wer@wer.youfarted.net) has joined #ceph
[19:46] <dmick> CristianDM: yes, the cluster can support multiple methods of access simultaneously
[19:46] <elder> But I'm obviously no Python programmer or I would have known to use single quotes.
[19:46] <dmick> by default mds/cephfs use the data/metadata pools, and rbd uses the rbd pool, so they're also segregated at that level
[19:47] <elder> joshd: By default, the set of tests specified is run once.
[19:47] <elder> If a (non-zero) count value is supplied, the complete set of
[19:47] <elder> tests will be run that number of times.
[19:47] <CristianDM> dmick: Perfect. And for the mds, I add the parameters to ceph.conf, but do I need some special command to create it?
[19:48] <joshd> elder: sounds good
[19:49] <dmick> elder: teuthology changes make sense to me as well
[19:50] <elder> OK thanks guys.
[19:50] <elder> I have pushed the ceph one to master.
[19:51] <elder> Is there any reason I should wait on teuthology for a bit? It *must* have the change in place or it will break.
[19:51] <elder> I suppose I ought to have written it differently so the option is only added if it was specified...
[19:51] <elder> Dayum.
[19:51] <dmick> CristianDM: yes, mkcephfs can set up an mds. ceph-deploy, the newer toolset, looks like it doesn't handle mds yet.
[19:52] * Ryan_Lane (~Adium@c-50-143-138-153.hsd1.ca.comcast.net) has joined #ceph
[19:52] <CristianDM> dmick: But how do I run mkcephfs only to create an mds? I can't see instructions for it in the docs
[19:52] <dmick> see comments at the head of the file
[19:52] <dmick> and it's just shell, so you could do the steps manually as well
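(Roughly what "add the parameters to ceph.conf" looks like for an mds in the mkcephfs era; the daemon name and host are placeholders. mkcephfs reads the [mds.*] sections when it builds the cluster, and the comments at the top of the mkcephfs script describe running the individual steps by hand.)

    [mds.a]
        host = node1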
[19:52] <joshd> elder: no, the scheduled runners don't update teuthology automatically, so it's fine to push it whenever
[19:53] <elder> OK. I'll do that then. Thanks a lot.
[19:53] * nhorman (~nhorman@nat-pool-rdu.redhat.com) Quit (Quit: Leaving)
[19:55] <rweeks> would radosgw then use a different pool from rbd and cephfs (I would think so)
[19:55] <dmick> rweeks: yeah, several IIRC
[19:55] <rweeks> right, that would make sense to mne
[19:55] <rweeks> _me_
[19:55] * lofejndif (~lsqavnbok@04ZAABB2H.tor-irc.dnsbl.oftc.net) has joined #ceph
[19:56] <rweeks> and if you're talking to something from an app using librados, you'd probably want to use a different pool as well
[19:56] * rweeks is just groking out the architecture of a multi-use ceph deployment
[19:57] <dmick> pools are both a namespace and an authorization partition, so yeah
[19:57] <dmick> you can also have different replication factors per pool
[19:57] * jmlowe (~Adium@2001:18e8:2:28a2:40b7:c683:af0f:15df) has joined #ceph
[19:58] <dmick> (and different number-of-pgs)
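(A hedged illustration of that per-pool separation from the command line; "apppool" is a placeholder name. cephfs uses the data/metadata pools, rbd uses rbd, and a librados application can be pointed at its own pool with its own replication factor.)

    rados lspools                        # default pools: data, metadata, rbd
    ceph osd pool create apppool 128     # dedicated pool for the application, 128 pgs
    ceph osd pool set apppool size 3     # and its own replica count
    rados -p apppool put greeting ./hello.txt   # write a test object into just that pool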
[19:58] <dmick> elder: and you can remove your remote branches at will once merged. (I often forget)
[19:58] <elder> OK.
[20:00] <rweeks> hm. what if you had, say, a python app writing objects via librados, and an app reading those objects via S3 API
[20:00] <rweeks> would that even work?
[20:00] <mikeryan> rados objects != rgw objects
[20:01] <dmick> right
[20:01] <dmick> I think you *might* be able to write with Swift, read with S3, or vice-versa, but
[20:01] <dmick> there's a lot of bookkeeping stuff about rgw objects that's stored in RADOS
[20:01] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) Quit (Ping timeout: 480 seconds)
[20:03] <joshd> dmick: yes, you can. there's some funkiness with acls (I think applying the intersection if acls are specified with both protocols), but plain read/write is easy
[20:03] <rweeks> ok
[20:03] <rweeks> I thought that was likely the case
[20:17] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[20:20] * tryggvil (~tryggvil@163-60-19-178.xdsl.simafelagid.is) Quit (Quit: tryggvil)
[20:21] * Cube (~Cube@cpe-76-95-223-199.socal.res.rr.com) Quit (Quit: Leaving.)
[20:23] * jtang1 (~jtang@79.97.135.214) has joined #ceph
[20:24] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[20:27] * CristianDM (~CristianD@host173.186-124-185.telecom.net.ar) Quit (Ping timeout: 480 seconds)
[20:27] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[20:27] * CristianDM (~CristianD@host173.186-124-185.telecom.net.ar) has joined #ceph
[20:30] * slang (~slang@ace.ops.newdream.net) has left #ceph
[20:30] * slang1 (~Thunderbi@ace.ops.newdream.net) has joined #ceph
[20:33] * Ryan_Lane (~Adium@c-50-143-138-153.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[20:35] * slang (~slang@ace.ops.newdream.net) has joined #ceph
[20:35] * slang (~slang@ace.ops.newdream.net) has left #ceph
[20:37] * slang (~slang@ace.ops.newdream.net) has joined #ceph
[20:37] * slang (~slang@ace.ops.newdream.net) has left #ceph
[20:39] * slang (~slang@ace.ops.newdream.net) has joined #ceph
[20:39] * slang (~slang@ace.ops.newdream.net) has left #ceph
[20:42] <Robe> wee
[20:42] <Robe> touchdown amsterdam hotel
[20:44] * slang1 (~Thunderbi@ace.ops.newdream.net) Quit (Quit: slang1)
[20:50] * slang (~slang@ace.ops.newdream.net) has joined #ceph
[20:51] <todin> Robe: in which hotel?
[20:51] <Robe> Nova Hotel
[20:52] <Robe> Nieuwezijds Voorburgwal 276,
[20:53] <todin> Robe: you walk like 5 min to the venue?
[20:54] * Cube (~Cube@12.248.40.138) has joined #ceph
[20:56] * sjustlaptop (~sam@ma20436d0.tmodns.net) has joined #ceph
[20:56] <Robe> todin: I hope so :)
[20:57] <Robe> you staying in the vicinity?
[20:58] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[20:59] <todin> not really, my company has strange travel rules
[21:01] <Robe> which are?
[21:02] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[21:05] <todin> Robe: quite difficult to explain, we have a very strange online booking tool, and the choices of hotels are very limited, but on the other hand taxi fees don't matter
[21:06] <Robe> hehehe
[21:06] <Robe> so you're staying in schiphol?
[21:06] <Robe> or germany? :>
[21:07] <todin> in Noord
[21:10] <Robe> ferry tiem!
[21:11] * jmlowe (~Adium@2001:18e8:2:28a2:40b7:c683:af0f:15df) Quit (Quit: Leaving.)
[21:11] <todin> Robe: that means what?
[21:12] <Robe> there are cute little ferries that drive between central and noord ;)
[21:13] <todin> really? where could I catch them?
[21:13] <NaioN> at several points
[21:13] <NaioN> where are you in noord?
[21:15] <todin> in the NH Galaxy hotel
[21:16] * jmlowe (~Adium@140-182-128-248.dhcp-bl.indiana.edu) has joined #ceph
[21:16] <NaioN> you could take it from Buiksloterweg to Central Station
[21:16] <NaioN> that's opposite of the channel
[21:17] <NaioN> and from there it's a 10min walk
[21:17] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[21:17] <NaioN> but I think it will take you also about 10min to get from your hotel to the ferry (waking)
[21:17] <NaioN> walking...
[21:18] * stass (stas@ssh.deglitch.com) Quit (Ping timeout: 480 seconds)
[21:20] <todin> yep, I Just looked it up in google maps
[21:20] * jmlowe (~Adium@140-182-128-248.dhcp-bl.indiana.edu) Quit ()
[21:20] * lofejndif (~lsqavnbok@04ZAABB2H.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[21:21] * danieagle (~Daniel@177.133.175.201) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[21:22] <NaioN> well we're coming from the south of the Netherlands with the train
[21:22] * stass (stas@ssh.deglitch.com) has joined #ceph
[21:23] <todin> so you are arriving tomorrow?
[21:24] <NaioN> well I live in the Netherlands... so it's not really arriving...
[21:24] <todin> ;-)
[21:25] <dmick> the train schedule would probably disagree :)
[21:25] <NaioN> but normally I would go by car, but downtown Amsterdam is a real pain with the car :)
[21:25] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[21:26] <NaioN> dmick: hope not! :)
[21:26] <dmick> if there's no departure and no arrival, I don't understand what a train is...
[21:26] <rweeks> a tesseract.
[21:26] <rweeks> <.<
[21:27] <dmick> maybe it uses Platform 8 3/4
[21:27] <rweeks> by definition, if you don't depart and don't arrive, you must have used the shortest distance between two points.
[21:27] <NaioN> dmick: wasn't it 9 3/4?
[21:27] <dmick> could be
[21:27] * senner (~Wildcard@68-113-232-90.dhcp.stpt.wi.charter.com) Quit (Quit: Leaving.)
[21:28] <NaioN> well most will come by plane the day before I assume, hence the arriving...
[21:28] <dmick> hah, it actually exists apparently. cool. http://golondon.about.com/od/londonpictures/ig/Less-seen-Sights/Platform-9-3-4.htm
[21:28] <NaioN> hehe
[21:29] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[21:32] * sjustlaptop (~sam@ma20436d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[21:37] <elder> dmick, do you know anything about autobuilder? I don't seem to be getting any updated results here: http://gitbuilder.sepia.ceph.com/gitbuilder-precise-kernel-amd64/
[21:38] <dmick> I do, and I can look
[21:38] <dmick> did you check the status page?
[21:38] <dmick> oh wait, that is the status page, sorry
[21:38] <elder> You mean http://ceph.com/gitbuilder.cgi
[21:38] <elder> Looks like same answer
[21:38] <dmick> no, that one's fine
[21:38] <dmick> which branch?
[21:39] <elder> I just updated wip-testing to 4be27cc
[21:39] <elder> I also deleted kill-mount-secret and uml-cleanup
[21:47] * Ryan_Lane (~Adium@216.38.130.163) has joined #ceph
[21:57] * sjustlaptop (~sam@68-119-138-53.dhcp.ahvl.nc.charter.com) has joined #ceph
[22:13] * verwilst (~verwilst@dD5769628.access.telenet.be) has joined #ceph
[22:17] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[22:25] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[22:30] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[22:31] * scalability-junk (~stp@188-193-211-236-dynip.superkabel.de) has joined #ceph
[22:35] * nwatkins1 (~Adium@soenat3.cse.ucsc.edu) has joined #ceph
[22:35] * nwatkins (~Adium@soenat3.cse.ucsc.edu) Quit (Read error: Connection reset by peer)
[22:46] * drokita (~drokita@199.255.228.10) has left #ceph
[23:01] * miroslav (~miroslav@c-98-248-210-170.hsd1.ca.comcast.net) has left #ceph
[23:11] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[23:12] * hhoover (~hhoover@of2-nat1.sat6.rackspace.com) Quit (Quit: ["Bye"])
[23:14] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) has joined #ceph
[23:16] * miroslav (~miroslav@173-228-38-131.dsl.dynamic.sonic.net) has joined #ceph
[23:22] * verwilst (~verwilst@dD5769628.access.telenet.be) Quit (Quit: Ex-Chat)
[23:28] * synapsr (~Adium@c-69-181-244-219.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[23:29] * jmlowe (~Adium@99.111.157.233) has joined #ceph
[23:34] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Read error: Connection reset by peer)
[23:35] * loicd (~loic@magenta.dachary.org) has joined #ceph
[23:38] * aliguori (~anthony@32.97.110.59) Quit (Remote host closed the connection)
[23:39] <dec> Can anyone clarify ipv6 support within Ceph (mon/osd/librados)?
[23:40] <dec> I can see (in msg_types.h) that the entity_addr_t seems to support ipv6 addresses, however there's also a comment slightly above which says 'ipv4 for now'...
[23:41] <joshd> that comment is obsolete. ipv6 has been supported for a long time
[23:42] <joshd> iirc dualstack is not (at least for the daemons)
[23:42] <dec> If I give 'mon addr' an ipv6 address, it still listens on an ipv4 address
[23:43] <dec> If I change such a setting in ceph.conf, do I need to rebuild a monmap or something?
[23:43] <joshd> monitors are the one thing that has a well-known address, stored in the monmap
[23:44] <joshd> the easiest way is to add a new mon with the new ip, and remove the old one
[23:44] <dec> ah, OK cool.
[23:45] <dec> # ceph mon delete c
[23:45] <dec> unknown command delete
[23:45] <dec> doh!
[23:48] <joshd> ceph mon remove, it turns out
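(A rough shape of "add a new mon with the new address, then remove the old one"; the mon name, address and the exact bootstrap steps for the new monitor are placeholders/assumptions that vary by version. Note the square brackets around the IPv6 address in ceph.conf.)

    # ceph.conf entry for the new monitor
    [mon.d]
        mon addr = [2001:db8::1]:6789

    # once mon.d is built, started and in quorum, drop the old one:
    ceph mon remove c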
[23:53] <dec> ah, perhaps I should submit a patch for "ceph --help"
[23:53] <dec> :)
[23:54] <joshd> indeed, ceph --help could use some help
[23:54] * PerlStalker (~PerlStalk@perlstalker-1-pt.tunnel.tserv8.dal1.ipv6.he.net) Quit (Quit: ...)
[23:57] * jmlowe (~Adium@99.111.157.233) Quit (Quit: Leaving.)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.