#ceph IRC Log

Index

IRC Log for 2013-08-01

Timestamps are in GMT/BST.

[0:51] -magnet.oftc.net- *** Looking up your hostname...
[0:51] -magnet.oftc.net- *** Checking Ident
[0:51] -magnet.oftc.net- *** Couldn't look up your hostname
[0:51] -magnet.oftc.net- *** No Ident response
[0:51] * CephLogBot (~PircBot@92.63.168.213) has joined #ceph
[0:51] * Topic is 'Latest stable (v0.61.7 "Cuttlefish") -- http://ceph.com/get || Ceph Developer Summit: Emperor - http://goo.gl/yy2Jh || Ceph Day NYC 01AUG2013 - http://goo.gl/TMIrZ'
[0:51] * Set by scuttlemonkey!~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net on Tue Jul 30 18:46:49 CEST 2013
[0:52] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Remote host closed the connection)
[0:52] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[0:52] * rudolfsteiner (~federicon@200.68.116.185) Quit (Quit: rudolfsteiner)
[0:52] * grepory1 (~Adium@50-115-70-146.static-ip.telepacific.net) Quit ()
[0:53] * lautriv (~lautriv@f050085055.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[0:55] <loicd> just
[0:58] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) has joined #ceph
[0:58] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) Quit ()
[1:00] * scuttlemonkey (~scuttlemo@216.194.44.151) has joined #ceph
[1:00] * ChanServ sets mode +o scuttlemonkey
[1:01] * mschiff (~mschiff@85.182.236.82) Quit (Ping timeout: 480 seconds)
[1:02] * lautriv (~lautriv@f050082253.adsl.alicedsl.de) has joined #ceph
[1:03] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[1:06] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:14] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Quit: Leaving.)
[1:15] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[1:23] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) has joined #ceph
[1:35] * yehudasa__ (~yehudasa@2607:f298:a:607:ea03:9aff:fe98:e8ff) Quit (Ping timeout: 480 seconds)
[1:39] * haomaiwang (~haomaiwan@notes4.com) has joined #ceph
[1:40] * haomaiwa_ (~haomaiwan@117.79.232.207) Quit (Read error: Connection reset by peer)
[1:57] * infernix (nix@5ED33947.cm-7-4a.dynamic.ziggo.nl) Quit (Ping timeout: 480 seconds)
[2:02] * yanzheng (~zhyan@jfdmzpr06-ext.jf.intel.com) Quit (Remote host closed the connection)
[2:05] * smiley (~smiley@ip-64-134-47-155.public.wayport.net) has joined #ceph
[2:11] * AfC (~andrew@gateway.syd.operationaldynamics.com) Quit (Quit: Leaving.)
[2:15] * smiley_ (~smiley@ip-64-134-47-155.public.wayport.net) has joined #ceph
[2:16] * huangjun (~kvirc@59.175.37.57) has joined #ceph
[2:18] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[2:19] * smiley (~smiley@ip-64-134-47-155.public.wayport.net) Quit (Ping timeout: 480 seconds)
[2:19] * smiley_ is now known as smiley
[2:20] * LeaChim (~LeaChim@2.122.178.96) Quit (Ping timeout: 480 seconds)
[2:27] * smiley (~smiley@ip-64-134-47-155.public.wayport.net) Quit (Quit: smiley)
[2:30] * scuttlemonkey (~scuttlemo@216.194.44.151) Quit (Ping timeout: 480 seconds)
[2:35] * xmltok (~xmltok@pool101.bizrate.com) Quit (Quit: Bye!)
[2:45] * smiley (~smiley@ip-64-134-47-155.public.wayport.net) has joined #ceph
[2:48] * yanzheng (~zhyan@134.134.139.72) has joined #ceph
[2:58] * julian (~julianwa@125.70.133.36) has joined #ceph
[3:08] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) Quit (Quit: Leaving.)
[3:08] * yy-nm (~chatzilla@218.74.35.76) has joined #ceph
[3:10] * dontalton (~don@128-107-239-234.cisco.com) Quit (Read error: Connection reset by peer)
[3:10] * jluis (~JL@89.181.148.68) Quit (Quit: Leaving)
[3:11] * Cube (~Cube@173.245.93.184) has joined #ceph
[3:12] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:15] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[3:16] * scuttlemonkey (~scuttlemo@216.194.44.151) has joined #ceph
[3:16] * ChanServ sets mode +o scuttlemonkey
[3:17] * nhm (~nhm@216.194.44.151) has joined #ceph
[3:21] <mtanski> I'm looking forward to the ceph event tomorrow
[3:21] <mtanski> Esp the parts about the roadmap for cephfs
[3:27] * diegows (~diegows@190.190.2.126) Quit (Ping timeout: 480 seconds)
[3:28] * smiley (~smiley@ip-64-134-47-155.public.wayport.net) Quit (Quit: smiley)
[3:33] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:40] <MACscr> hmm, so in order to properly do ceph, what is the minimum number of servers i should use? 4? 3 storage nodes and a metadata/management server, or what? What happens if the management server goes down?
[3:40] <MACscr> my apologies for the maybe ignorant question. I haven't dived too deeply into ceph yet
[3:43] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:44] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[3:48] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[3:48] * rongze (~quassel@117.79.232.184) has joined #ceph
[3:49] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:49] <cfreak201> MACscr: you'll need at least 2 storage nodes and 3 mons so you never lose quorum
[3:50] <cfreak201> MACscr: that is if you run one mon on each osd server and have an extra server which runs a mon and optionally an mds + other stuff
[3:52] <MACscr> so we're talking 5 servers?
[3:55] <cfreak201> MACscr: you can run a mon on the same hardware as the osds - no idea how that will perform, but it's possible
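[Editor's note: the "never lose quorum" point above is just majority arithmetic — a Ceph monitor cluster stays usable only while a strict majority of mons is alive. This is not Ceph code, only a sketch of that arithmetic, which is why odd monitor counts (3, 5) are the usual advice:]

```python
# Monitor quorum arithmetic: Ceph mons use a Paxos-style majority,
# so the cluster needs floor(n/2) + 1 monitors up to make progress.

def quorum_size(n_mons: int) -> int:
    """Smallest number of monitors that forms a strict majority."""
    return n_mons // 2 + 1

def tolerated_failures(n_mons: int) -> int:
    """How many monitors can fail before quorum is lost."""
    return n_mons - quorum_size(n_mons)

# n_mons  quorum  tolerated failures
#   1       1        0
#   2       2        0   <- 2 mons are no better than 1
#   3       2        1
#   4       3        1
#   5       3        2
for n in (1, 2, 3, 4, 5):
    print(n, quorum_size(n), tolerated_failures(n))
```

Note that 2 mons tolerate zero failures, the same as a single mon, which is why even counts buy nothing.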
[3:56] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (Remote host closed the connection)
[4:01] <cfreak201> Can anyone tell me if copy-on-write for new vms (image -> volume i suppose?) is supported with grizzly and newer? Whenever I create a new VM it tries to copy the file to the compute node, which has a 2GB filesystem... (thought to be RO in the end) I've enabled the show_image_direct_url flag.. digging through the python files I see that at least glance is always copying, up to 2013.2b2
[4:05] <joshd> cfreak201: do you have glance_api_version=2 in cinder.conf?
[4:05] <cfreak201> joshd: yes
[4:06] <joshd> I'd double-check the glance and cinder config files then - make sure everything is in the [DEFAULT] section for glance
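[Editor's note: for reference, the settings joshd is checking look roughly like the fragments below. This is a hedged sketch for a Grizzly-era RBD setup — exact option names and defaults vary by OpenStack release, so treat the values as placeholders:]

```ini
# /etc/cinder/cinder.conf  (RBD-backed volumes, Grizzly-era sketch)
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
glance_api_version = 2

# /etc/glance/glance-api.conf  (everything in [DEFAULT], per joshd)
[DEFAULT]
default_store = rbd
show_image_direct_url = True
```

With both in place, cinder can see the RBD location of a glance image and clone it instead of downloading it.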
[4:07] <MACscr> ah, MDS isn't even used with Ceph Block Devices or Object Storage, so i wouldn't even need that
[4:10] <joshd> cfreak201: oh, are you not booting from a volume?
[4:10] <joshd> cfreak201: grizzly won't convert an image to a volume automatically on boot, but havana should
[4:10] <cfreak201> joshd: havana is 2013.2 ?
[4:11] <joshd> cfreak201: you can create a volume, snapshot it, and then boot new vms from the volume snapshot (nova knows how to do that already)
[4:11] <joshd> cfreak201: not merged yet
[4:12] <joshd> cfreak201: havana is out in the fall
[4:12] <cfreak201> :/ at least in cinder i found some code that has been merged to master... looks promising..
[4:12] <joshd> cfreak201: you can create a cow volume from an image
[4:12] <joshd> cfreak201: nova just won't do it for you on boot yet
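[Editor's note: the manual path joshd describes — create the COW volume yourself, then boot from it — looks something like the commands below. A sketch only: the UUIDs and names are placeholders, and client syntax varied across Grizzly-era releases:]

```
# Create a volume from a Glance image; with RBD on both sides and
# show_image_direct_url enabled, this is a copy-on-write clone:
cinder create --image-id <IMAGE_UUID> --display-name myvol 10

# Boot from the volume instead of the image (nova won't do the
# image->volume conversion for you on boot until Havana):
nova boot --flavor m1.small --block-device-mapping vda=<VOLUME_UUID>:::0 myvm
```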
[4:12] <cfreak201> ok
[4:13] <cfreak201> i really hope they are on time with havana.. lots of stuff that I would already need ..
[4:20] <MACscr> hmm, I can definitely do 3 Storage Nodes, but adding 2 to 3 more nodes just for Ceph Monitoring seems overkill. I only have a total of 10 compute nodes for OpenStack and they are all diskless.
[4:20] * off_rhoden (~anonymous@pool-173-79-66-35.washdc.fios.verizon.net) Quit (Quit: off_rhoden)
[4:25] <MACscr> I wonder if i could put an openstack controller and a ceph-mon node on the same system. That would give me two of each. Then do two osd nodes for storage.
[4:25] <MACscr> er, nvm
[4:27] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:30] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) Quit (Ping timeout: 480 seconds)
[4:41] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) has joined #ceph
[4:46] * nhm (~nhm@216.194.44.151) Quit (Ping timeout: 480 seconds)
[4:48] * Cube (~Cube@173.245.93.184) Quit (Quit: Leaving.)
[4:50] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[4:53] * AfC (~andrew@2001:44b8:31cb:d400:a4c3:3afb:f0e:4fef) has joined #ceph
[4:55] * Yen (~Yen@2a00:f10:103:201:ba27:ebff:fefb:350a) Quit (Quit: Exit.)
[5:05] * fireD (~fireD@93-142-252-173.adsl.net.t-com.hr) has joined #ceph
[5:06] * scuttlemonkey (~scuttlemo@216.194.44.151) Quit (Ping timeout: 480 seconds)
[5:07] * fireD_ (~fireD@93-142-245-150.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:17] * rongze (~quassel@117.79.232.184) Quit (Ping timeout: 480 seconds)
[5:20] * rongze (~quassel@117.79.232.202) has joined #ceph
[5:39] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) has joined #ceph
[5:56] * wschulze (~wschulze@cpe-69-203-80-81.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:16] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (Quit: Leaving.)
[6:18] * codice (~toodles@75-140-71-24.dhcp.lnbh.ca.charter.com) Quit (Remote host closed the connection)
[6:18] * codice (~toodles@75-140-71-24.dhcp.lnbh.ca.charter.com) has joined #ceph
[6:33] * xmltok (~xmltok@relay.els4.ticketmaster.com) has joined #ceph
[6:54] * rongze (~quassel@117.79.232.202) Quit (Ping timeout: 480 seconds)
[7:09] * DarkAce-Z (~BillyMays@50.107.55.36) has joined #ceph
[7:12] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[7:13] * Cube (~Cube@173.245.93.184) has joined #ceph
[7:17] <yy-nm> hello, all. i have a question about creating multiple clusters on the same hardware using ceph-deploy
[7:17] <yy-nm> does that mean multiple clusters using the same osd disks??
[7:18] <huangjun> yy-nm: osd data dirs default to /var/lib/ceph/osd/
[7:19] <huangjun> so if you have a cluster named ceph1 it will have dirs like ceph1-1, which means osd.1 in cluster ceph1
[7:20] <yy-nm> you mean just using the same server machine?
[7:22] <huangjun> i think so, what was your result?
[7:24] <yy-nm> sorry, i don't have the resources to prove it.
[7:27] <yy-nm> i've just begun to use the ceph-deploy utility
[7:34] <dmick> yy-nm: the bones of multi-cluster support are there, but I think there are still problems to be solved
[7:34] <dmick> it's a bit untested at least
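[Editor's note: the cluster-name scheme huangjun describes is driven by a `--cluster` name that keys the config file and data dirs. A hedged sketch of what that would look like — and, per dmick, this path is under-tested, so the exact invocations are illustrative, not a recipe:]

```
# Each cluster is keyed by name: /etc/ceph/<name>.conf and
# /var/lib/ceph/osd/<name>-<id> (e.g. ceph1.conf, ceph1-1).
ceph-deploy --cluster ceph1 new node1 node2 node3

# Point the CLI at the named cluster's config:
ceph --cluster ceph1 status
```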
[7:38] <yy-nm> dmick, you mean i can also create multiple clusters using mkcephfs ??
[7:41] <dmick> I....said nothing about mkcephfs, no
[7:42] <yy-nm> ok, i misunderstood.
[7:56] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) Quit (Ping timeout: 480 seconds)
[7:57] * xmltok (~xmltok@relay.els4.ticketmaster.com) Quit (Quit: Leaving...)
[8:01] * mschiff (~mschiff@85.182.236.82) has joined #ceph
[8:10] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[8:12] * sleinen1 (~Adium@2001:620:0:25:4468:ceb:e1bc:e38b) has joined #ceph
[8:13] * huangjun (~kvirc@59.175.37.57) Quit (Ping timeout: 480 seconds)
[8:13] * mschiff (~mschiff@85.182.236.82) Quit (Remote host closed the connection)
[8:15] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[8:18] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:21] * rongze (~quassel@117.79.232.234) has joined #ceph
[8:32] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[8:33] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) has joined #ceph
[8:33] * xmltok (~xmltok@cpe-76-170-26-114.socal.res.rr.com) Quit ()
[8:38] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[8:44] * sleinen1 (~Adium@2001:620:0:25:4468:ceb:e1bc:e38b) Quit (Quit: Leaving.)
[8:47] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has left #ceph
[8:48] * saabylaptop (~saabylapt@1009ds5-oebr.1.fullrate.dk) has joined #ceph
[8:57] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:06] * mschiff (~mschiff@pD9511FE4.dip0.t-ipconnect.de) has joined #ceph
[9:07] * saabylaptop (~saabylapt@1009ds5-oebr.1.fullrate.dk) Quit (Quit: Leaving.)
[9:08] * jjgalvez1 (~jjgalvez@ip72-193-215-88.lv.lv.cox.net) Quit (Read error: Connection reset by peer)
[9:09] * jjgalvez (~jjgalvez@ip72-193-215-88.lv.lv.cox.net) has joined #ceph
[9:10] * Cube (~Cube@173.245.93.184) Quit (Quit: Leaving.)
[9:19] * bergerx_ (~bekir@78.188.101.175) has joined #ceph
[9:29] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) has joined #ceph
[9:36] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[9:40] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[9:42] * Cube (~Cube@173.245.93.184) has joined #ceph
[9:47] * dobber (~dobber@213.169.45.222) has joined #ceph
[9:48] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[9:48] * sleinen (~Adium@2001:620:0:26:209f:4573:cc16:1878) has joined #ceph
[9:56] * mynameisbruce (~mynameisb@tjure.netzquadrat.de) has joined #ceph
[10:08] * mynameisbruce (~mynameisb@tjure.netzquadrat.de) Quit (Quit: Bye)
[10:15] * mynameisbruce (~mynameisb@tjure.netzquadrat.de) has joined #ceph
[10:21] * mschiff_ (~mschiff@pD9511FE4.dip0.t-ipconnect.de) has joined #ceph
[10:21] * mschiff (~mschiff@pD9511FE4.dip0.t-ipconnect.de) Quit (Read error: Connection reset by peer)
[10:25] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[10:26] * lautriv (~lautriv@f050082253.adsl.alicedsl.de) Quit (Remote host closed the connection)
[10:29] * LeaChim (~LeaChim@2.122.178.96) has joined #ceph
[10:35] * Cube (~Cube@173.245.93.184) Quit (Quit: Leaving.)
[10:41] * TMM (~hp@535240C7.cm-6-3b.dynamic.ziggo.nl) Quit (Remote host closed the connection)
[10:41] * TMM (~hp@535240C7.cm-6-3b.dynamic.ziggo.nl) has joined #ceph
[10:44] * leseb_ is now known as leseb
[11:00] * jjgalvez (~jjgalvez@ip72-193-215-88.lv.lv.cox.net) Quit (Quit: Leaving.)
[11:01] * yanzheng (~zhyan@134.134.139.72) Quit (Remote host closed the connection)
[11:09] * masterpe (~masterpe@2a01:670:400::43) Quit (Ping timeout: 480 seconds)
[11:30] * masterpe (~masterpe@8-3-159-88.lab.edutel.nl) has joined #ceph
[11:30] * masterpe (~masterpe@8-3-159-88.lab.edutel.nl) Quit (Read error: Connection reset by peer)
[11:35] * masterpe (~masterpe@2a01:670:400::43) has joined #ceph
[11:46] * yy-nm (~chatzilla@218.74.35.76) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 22.0/20130618035212])
[11:50] * bwesemann (~bwesemann@2001:1b30:0:6:bc92:1101:ba5e:bd27) Quit (Remote host closed the connection)
[11:50] * bwesemann (~bwesemann@2001:1b30:0:6:c829:66ef:93a9:b43) has joined #ceph
[12:08] * ScOut3R (~ScOut3R@catv-89-133-25-52.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:15] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[12:24] * nhm (~nhm@216.194.44.151) has joined #ceph
[12:28] * stxShadow (~Jens@ip-88-152-161-249.unitymediagroup.de) has joined #ceph
[12:36] * forgery (~oftc-webi@gw.vpn.autistici.org) has joined #ceph
[12:36] <forgery> hello
[12:39] <forgery> i've a dumb question: i want to know how ceph interacts with a physical disk (/dev/sdXX), is there a specs doc? e.g. if i have one system with 1 disk and 2 partitions (sda1, sda2) formatted with ext4, do i install ceph on it and configure an OSD over these two partitions?
[12:42] * jluis (~joao@89.181.144.108) has joined #ceph
[12:47] * joao (~joao@89.181.148.68) Quit (Ping timeout: 480 seconds)
[12:48] * sleinen (~Adium@2001:620:0:26:209f:4573:cc16:1878) Quit (Quit: Leaving.)
[12:49] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[12:52] <joelio> forgery: running an osd on the same disk as the os is really not recommended
[12:52] <joelio> This is how Ceph fundamentally works - http://ceph.com/docs/next/architecture/
[12:53] <joelio> Hardware recommendations - http://ceph.com/docs/next/install/hardware-recommendations/
[13:03] * s2r2 (uid322@id-322.ealing.irccloud.com) has joined #ceph
[13:05] * rudolfsteiner (~federicon@141-77-235-201.fibertel.com.ar) has joined #ceph
[13:06] * rudolfsteiner (~federicon@141-77-235-201.fibertel.com.ar) Quit ()
[13:11] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:16] * stxShadow (~Jens@ip-88-152-161-249.unitymediagroup.de) Quit (Read error: Connection reset by peer)
[13:22] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:26] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[13:29] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[13:35] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[13:35] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[13:38] * dobber (~dobber@213.169.45.222) Quit (Remote host closed the connection)
[13:49] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[13:55] * saabylaptop (~saabylapt@1009ds5-oebr.1.fullrate.dk) has joined #ceph
[13:59] * joao (~JL@89.181.144.108) has joined #ceph
[13:59] * ChanServ sets mode +o joao
[13:59] * nhm (~nhm@216.194.44.151) Quit (Quit: Lost terminal)
[14:03] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Remote host closed the connection)
[14:04] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[14:11] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) Quit (Quit: Leaving.)
[14:11] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[14:12] * mtk (~mtk@68.195.89.131) has joined #ceph
[14:19] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[14:21] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[14:26] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Remote host closed the connection)
[14:27] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[14:28] <n1md4> hi. i've followed the install guide, and have created a few osds.
[14:29] <n1md4> i'm used to lvm or drbd, where there's something i can check the status of. what are the tools in ceph to check health?
[14:29] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[14:29] <n1md4> ..health, capacity, drives, whatever ..
[14:30] * jefferai (~quassel@corkblock.jefferai.org) Quit (Quit: No Ping reply in 180 seconds.)
[14:31] * jefferai (~quassel@corkblock.jefferai.org) has joined #ceph
[14:31] <n1md4> found it http://ceph.com/docs/master/rados/operations/monitoring/
[14:31] <n1md4> :)
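[Editor's note: for readers following along, the monitoring doc n1md4 found boils down to a handful of CLI commands. These need a running cluster and a client keyring, so this is a reference sketch rather than something to paste blindly:]

```
ceph health      # one-line HEALTH_OK / HEALTH_WARN / HEALTH_ERR summary
ceph status      # mons, osds, pg states and capacity at a glance (alias: ceph -s)
ceph df          # cluster-wide and per-pool usage
ceph osd tree    # which OSDs are up/in, and how they map to hosts
ceph -w          # follow the cluster log live
```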
[14:32] * saabylaptop (~saabylapt@1009ds5-oebr.1.fullrate.dk) Quit (Quit: Leaving.)
[14:32] <forgery> drbd under ceph? why?
[14:36] * saabylaptop (~saabylapt@1009ds5-oebr.1.fullrate.dk) has joined #ceph
[14:41] <MACscr> forgery: he was saying what he was used to, not saying he uses it with ceph
[14:42] <forgery> ah :)
[14:42] <forgery> i just read it so fast :p
[14:45] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[14:46] <n1md4> i do have another question. http://pastie.org/pastes/8196602/text is it better to have entire drives assigned to OSDs? Is there a way to force an osd?
[14:49] <mikedawson> n1md4: one physical drive per OSD without any hardware/software RAID is a typical setup
[14:50] <n1md4> mikedawson: thanks. for the sake of my current build, is it possible to force just a partition?
[14:52] <mikedawson> n1md4: I haven't used partitions or ceph-deploy, so I'll let someone else comment
[14:55] <n1md4> the guide suggests you can, without forcing, but from my pastie above, it looks like it doesn't like the fact that sda1 has md0 on it .. not sure why that would be a problem though.
[14:56] <forgery> sda1 is part of raid or so?
[15:01] * saabylaptop (~saabylapt@1009ds5-oebr.1.fullrate.dk) Quit (Quit: Leaving.)
[15:02] <n1md4> forgery: yes.
[15:03] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[15:03] <n1md4> i have 4 drives, all identically partitioned. sda1 10g md0 raid1 (2 spares) /, sda2 4g md1 raid6 swap, sda3 empty
[15:04] <forgery> so sda1 is a part of raid...
[15:05] <forgery> why do you want to break the raid? (if i understand)
[15:05] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit ()
[15:05] <n1md4> I don't. the third partition on each disk is empty
[15:06] <n1md4> as you can see from the pastie.org link above, i've stipulated sda:/dev/sda3, but the returned message complains about sda1 and md0.
[15:08] <forgery> yes, i see, i don't know why. sorry
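[Editor's note: for reference, ceph-deploy does accept a partition as the OSD data device, in `HOST:DISK` form. A hedged sketch with placeholder host/device names — and note that zapping is destructive, which is exactly why you would *not* zap a disk like n1md4's sda, where sda1 is part of md0:]

```
# Prepare and activate an OSD on a single partition rather than a whole disk:
ceph-deploy osd prepare node1:/dev/sda3
ceph-deploy osd activate node1:/dev/sda3

# Only on a disk that is fully dedicated to Ceph, stale signatures can be
# cleared first (DESTROYS all data and partitions on that disk):
# ceph-deploy disk zap node1:sdb
```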
[15:16] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (Remote host closed the connection)
[15:16] * rongze (~quassel@754fe8ea.test.dnsbl.oftc.net) Quit (Read error: Connection reset by peer)
[15:18] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * masterpe (~masterpe@2a01:670:400::43) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * gregaf (~Adium@2607:f298:a:607:112c:1fa8:77e1:af2e) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * guppy (~quassel@guppy.xxx) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * AaronSchulz (~chatzilla@192.195.83.36) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * Azrael (~azrael@terra.negativeblue.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * jeroenmoors (~quassel@193.104.8.40) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * jnq (~jon@0001b7cc.user.oftc.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * lmb (lmb@212.8.204.10) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * nigwil (~idontknow@174.143.209.84) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * Jakdaw (~chris@puma-mxisp.mxtelecom.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * jf-jenni (~jf-jenni@stallman.cse.ohio-state.edu) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * Kdecherf (~kdecherf@shaolan.kdecherf.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * maswan (maswan@kennedy.acc.umu.se) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * sbadia (~sbadia@yasaw.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * soren (~soren@hydrogen.linux2go.dk) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * chutz (~chutz@rygel.linuxfreak.ca) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * yeled (~yeled@spodder.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * LeaChim (~LeaChim@2.122.178.96) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * mynameisbruce (~mynameisb@tjure.netzquadrat.de) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * haomaiwang (~haomaiwan@notes4.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * BillK (~BillK-OFT@124-168-243-244.dyn.iinet.net.au) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * Daviey (~DavieyOFT@bootie.daviey.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * cfreak201 (~cfreak200@p4FF3E75F.dip0.t-ipconnect.de) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * jochen (~jochen@laevar.de) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * terje_ (~joey@97-118-115-214.hlrn.qwest.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * NaioN_ (stefan@andor.naion.nl) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * tdb (~tdb@willow.kent.ac.uk) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * Fetch_ (fetch@gimel.cepheid.org) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * Tamil (~tamil@38.122.20.226) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * dmick (~dmick@38.122.20.226) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * davidz (~Adium@ip68-5-239-214.oc.oc.cox.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * \ask (~ask@oz.develooper.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * Psi-Jack_ (~Psi-Jack@yggdrasil.hostdruids.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * jks (~jks@3e6b5724.rev.stofanet.dk) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * bergerx_ (~bekir@78.188.101.175) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * SvenPHX (~scarter@wsip-174-79-34-244.ph.ph.cox.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * iggy (~iggy@theiggy.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * markl (~mark@tpsit.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * terje-_ (~root@135.109.216.239) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * janisg (~troll@85.254.50.23) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * Ormod (~valtha@ohmu.fi) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * nwf (~nwf@67.62.51.95) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * nwl (~levine@atticus.yoyo.org) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * baffle_ (baffle@jump.stenstad.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * alexbligh (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * _robbat2|irssi (nobody@www2.orbis-terrarum.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * tchmnkyz (~jeremy@0001638b.user.oftc.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * joshd (~joshd@38.122.20.226) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * sjust (~sam@38.122.20.226) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * saml (~sam@adfb12c6.cst.lightpath.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * cjh_ (~cjh@ps123903.dreamhost.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * josef (~seven@li70-116.members.linode.com) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * liiwi (liiwi@idle.fi) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * [cave] (~quassel@boxacle.net) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * [fred] (fred@konfuzi.us) Quit (charon.oftc.net synthon.oftc.net)
[15:18] * Sargun_ (~sargun@208-106-98-2.static.sonic.net) Quit (charon.oftc.net synthon.oftc.net)
[15:22] * [cave] (~quassel@boxacle.net) has joined #ceph
[15:22] * Ormod (~valtha@ohmu.fi) has joined #ceph
[15:22] * nwf (~nwf@67.62.51.95) has joined #ceph
[15:22] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[15:22] * Sargun_ (~sargun@208-106-98-2.static.sonic.net) has joined #ceph
[15:22] * liiwi (liiwi@idle.fi) has joined #ceph
[15:22] * baffle_ (baffle@jump.stenstad.net) has joined #ceph
[15:22] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[15:22] * [fred] (fred@konfuzi.us) has joined #ceph
[15:22] * alexbligh (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[15:22] * _robbat2|irssi (nobody@www2.orbis-terrarum.net) has joined #ceph
[15:22] * tchmnkyz (~jeremy@0001638b.user.oftc.net) has joined #ceph
[15:22] * joshd (~joshd@38.122.20.226) has joined #ceph
[15:22] * sjust (~sam@38.122.20.226) has joined #ceph
[15:22] * josef (~seven@li70-116.members.linode.com) has joined #ceph
[15:22] * mjeanson (~mjeanson@00012705.user.oftc.net) has joined #ceph
[15:22] * saml (~sam@adfb12c6.cst.lightpath.net) has joined #ceph
[15:22] * cjh_ (~cjh@ps123903.dreamhost.com) has joined #ceph
[15:22] * janisg (~troll@85.254.50.23) has joined #ceph
[15:22] * terje-_ (~root@135.109.216.239) has joined #ceph
[15:22] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) has joined #ceph
[15:22] * markl (~mark@tpsit.com) has joined #ceph
[15:22] * iggy (~iggy@theiggy.com) has joined #ceph
[15:22] * SvenPHX (~scarter@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[15:22] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) has joined #ceph
[15:22] * bergerx_ (~bekir@78.188.101.175) has joined #ceph
[15:22] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[15:22] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[15:22] * masterpe (~masterpe@2a01:670:400::43) has joined #ceph
[15:22] * LeaChim (~LeaChim@2.122.178.96) has joined #ceph
[15:22] * mynameisbruce (~mynameisb@tjure.netzquadrat.de) has joined #ceph
[15:22] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[15:22] * infernix (nix@cl-1404.ams-04.nl.sixxs.net) has joined #ceph
[15:22] * haomaiwang (~haomaiwan@notes4.com) has joined #ceph
[15:22] * BillK (~BillK-OFT@124-168-243-244.dyn.iinet.net.au) has joined #ceph
[15:22] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) has joined #ceph
[15:22] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[15:22] * gregaf (~Adium@2607:f298:a:607:112c:1fa8:77e1:af2e) has joined #ceph
[15:22] * KindOne (KindOne@0001a7db.user.oftc.net) has joined #ceph
[15:22] * cfreak201 (~cfreak200@p4FF3E75F.dip0.t-ipconnect.de) has joined #ceph
[15:22] * guppy (~quassel@guppy.xxx) has joined #ceph
[15:22] * AaronSchulz (~chatzilla@192.195.83.36) has joined #ceph
[15:22] * Daviey (~DavieyOFT@bootie.daviey.com) has joined #ceph
[15:22] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[15:22] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[15:22] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[15:22] * davidz (~Adium@ip68-5-239-214.oc.oc.cox.net) has joined #ceph
[15:22] * dmick (~dmick@38.122.20.226) has joined #ceph
[15:22] * Psi-Jack_ (~Psi-Jack@yggdrasil.hostdruids.com) has joined #ceph
[15:22] * Tamil (~tamil@38.122.20.226) has joined #ceph
[15:22] * yeled (~yeled@spodder.com) has joined #ceph
[15:22] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[15:22] * Kdecherf (~kdecherf@shaolan.kdecherf.com) has joined #ceph
[15:22] * beardo (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[15:22] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[15:22] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[15:22] * jf-jenni (~jf-jenni@stallman.cse.ohio-state.edu) has joined #ceph
[15:22] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[15:22] * Jakdaw (~chris@puma-mxisp.mxtelecom.com) has joined #ceph
[15:22] * nigwil (~idontknow@174.143.209.84) has joined #ceph
[15:22] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[15:22] * lmb (lmb@212.8.204.10) has joined #ceph
[15:22] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[15:22] * sbadia (~sbadia@yasaw.net) has joined #ceph
[15:22] * maswan (maswan@kennedy.acc.umu.se) has joined #ceph
[15:22] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[15:22] * jnq (~jon@0001b7cc.user.oftc.net) has joined #ceph
[15:22] * jeroenmoors (~quassel@193.104.8.40) has joined #ceph
[15:22] * soren (~soren@hydrogen.linux2go.dk) has joined #ceph
[15:22] * Fetch_ (fetch@gimel.cepheid.org) has joined #ceph
[15:22] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) has joined #ceph
[15:22] * tdb (~tdb@willow.kent.ac.uk) has joined #ceph
[15:22] * NaioN_ (stefan@andor.naion.nl) has joined #ceph
[15:22] * terje_ (~joey@97-118-115-214.hlrn.qwest.net) has joined #ceph
[15:22] * jochen (~jochen@laevar.de) has joined #ceph
[15:22] * \ask (~ask@oz.develooper.com) has joined #ceph
[15:22] <MACscr> maybe a dumb question, but what do you guys think of doing a single storage node for ceph to start out and then adding more nodes later? I can probably swing a second node, but from my reading, it seems like the best options are 1 or 3
[15:22] * ChanServ sets mode +v elder
[15:22] * ChanServ sets mode +o dmick
[15:22] * rongze (~quassel@754fe8ea.test.dnsbl.oftc.net) has joined #ceph
[15:23] * mtanski (~mtanski@177.sub-70-208-75.myvzw.com) has joined #ceph
[15:27] * AfC1 (~andrew@2001:44b8:31cb:d400:9df4:2ccc:9268:e469) has joined #ceph
[15:29] * AfC (~andrew@2001:44b8:31cb:d400:a4c3:3afb:f0e:4fef) Quit (Ping timeout: 480 seconds)
[15:29] * huangjun (~kvirc@106.120.176.62) has joined #ceph
[15:33] * AfC1 (~andrew@2001:44b8:31cb:d400:9df4:2ccc:9268:e469) Quit (Quit: Leaving.)
[15:33] <agh> Hello, where can i find some info about the future multi-region radosgw feature?
[15:33] <agh> Will the next radosgw embed a clustering system?
[15:38] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Quit: No Ping reply in 180 seconds.)
[15:38] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[15:38] * haomaiwang (~haomaiwan@notes4.com) Quit (Read error: Connection reset by peer)
[15:38] * haomaiwang (~haomaiwan@notes4.com) has joined #ceph
[15:40] * odyssey4me (~odyssey4m@165.233.71.2) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * n1md4 (~nimda@anion.cinosure.com) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * iggy_ (~iggy@theiggy.com) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * _Tassadar (~tassadar@tassadar.xs4all.nl) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * lupine (~lupine@lupine.me.uk) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * Pauline (~middelink@2001:838:3c1:1:be5f:f4ff:fe58:e04) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * niklas (~niklas@2001:7c0:409:8001::32:115) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * Gugge-47527 (gugge@kriminel.dk) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * Ludo__ (~Ludo@falbala.zoxx.net) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * ggreg_ (~ggreg@int.0x80.net) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * cce (~cce@50.56.54.167) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * rtek (~sjaak@rxj.nl) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * phantomcircuit (~phantomci@covertinferno.org) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * denken (~denken@dione.pixelchaos.net) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * seif (uid11725@ealing.irccloud.com) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * jerker (jerker@Psilocybe.Update.UU.SE) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:40] * asadpanda (~asadpanda@2001:470:c09d:0:20c:29ff:fe4e:a66) Quit (reticulum.oftc.net solenoid.oftc.net)
[15:42] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[15:42] * n1md4 (~nimda@anion.cinosure.com) has joined #ceph
[15:42] * iggy_ (~iggy@theiggy.com) has joined #ceph
[15:42] * _Tassadar (~tassadar@tassadar.xs4all.nl) has joined #ceph
[15:42] * lupine (~lupine@lupine.me.uk) has joined #ceph
[15:42] * Pauline (~middelink@2001:838:3c1:1:be5f:f4ff:fe58:e04) has joined #ceph
[15:42] * niklas (~niklas@2001:7c0:409:8001::32:115) has joined #ceph
[15:42] * Gugge-47527 (gugge@kriminel.dk) has joined #ceph
[15:42] * Ludo__ (~Ludo@falbala.zoxx.net) has joined #ceph
[15:42] * ggreg_ (~ggreg@int.0x80.net) has joined #ceph
[15:42] * cce (~cce@50.56.54.167) has joined #ceph
[15:42] * rtek (~sjaak@rxj.nl) has joined #ceph
[15:42] * phantomcircuit (~phantomci@covertinferno.org) has joined #ceph
[15:42] * denken (~denken@dione.pixelchaos.net) has joined #ceph
[15:42] * seif (uid11725@ealing.irccloud.com) has joined #ceph
[15:42] * jerker (jerker@Psilocybe.Update.UU.SE) has joined #ceph
[15:42] * asadpanda (~asadpanda@2001:470:c09d:0:20c:29ff:fe4e:a66) has joined #ceph
[15:43] <lupine> ...by the NSA
[15:44] * ChanServ sets mode +v dmick
[15:44] * ChanServ changes topic to 'Latest stable (v0.61.7 "Cuttlefish") -- http://ceph.com/get || Ceph Developer Summit: Emperor - http://goo.gl/yy2Jh || Ceph Day NYC 01AUG2013 - http://goo.gl/TMIrZ'
[15:44] * ChanServ sets mode +v joao
[15:44] <phantomcircuit> wat
[15:47] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[15:50] * mtanski (~mtanski@177.sub-70-208-75.myvzw.com) Quit (Quit: mtanski)
[15:53] * aliguori (~anthony@32.97.110.51) has joined #ceph
[16:00] <n1md4> MACscr: there is mention of it in the setup guide, not sure how scalable it would be; I'm only beginning with Ceph myself.
[16:04] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[16:05] <forgery> I'm reading this:
[16:05] <forgery> Tip: Running multiple OSDs on a single disk (irrespective of partitions) is NOT a good idea. Tip: Running an OSD and a monitor or a metadata server on a single disk (irrespective of partitions) is NOT a good idea either.
[16:06] <forgery> i can imagine that you can do it, but it's not a good idea.
[16:07] <phantomcircuit> forgery, multiple osds on a single disk would cause horrible thrashing
[16:08] <phantomcircuit> as the disk seeks all over the place
[16:11] * mbjorling (~SilverWol@130.226.133.120) has joined #ceph
[16:13] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:23] <forgery> is the data between osds encrypted on the network?
[16:23] <forgery> i don't see this option in the conf
[16:24] <MACscr> forgery: why would they be?
[16:26] * Kioob (~kioob@2a01:e35:2432:58a0:21e:8cff:fe07:45b6) Quit (Quit: Leaving.)
[16:26] <lupine> the utility is obvious if it's over a public network
[16:26] <lupine> but it's not the sanest use case ever
[16:27] <forgery> i know that using ceph on a geo cluster is bad, but it's on a 100Mb fiber
[16:27] <forgery> the 2 servers are connected over vpn now
[16:27] <MACscr> xtreemfs is better for that type of thing
[16:27] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Quit: No Ping reply in 180 seconds.)
[16:28] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[16:28] * AfC (~andrew@2001:44b8:31cb:d400:9df4:2ccc:9268:e469) has joined #ceph
[16:28] <forgery> in the past i've used iscsi only over vpn
[16:28] <MACscr> iscsi over vpn? lol
[16:29] <forgery> why not?
[16:29] * AfC (~andrew@2001:44b8:31cb:d400:9df4:2ccc:9268:e469) Quit ()
[16:29] <MACscr> because iscsi and latency do not work well together and iscsi is not meant to be used over wan
[16:30] <forgery> that is why i'm using ceph now..
[16:30] <MACscr> but its not made for that purpose either
[16:30] <forgery> but ceph too is not a good idea over wan
[16:30] * AfC (~andrew@2001:44b8:31cb:d400:9df4:2ccc:9268:e469) has joined #ceph
[16:30] <forgery> yea, i see
[16:30] <MACscr> http://www.xtreemfs.org/
[16:31] <forgery> i'm seeing it
[16:31] <darkfaded> i once formatted some netapp luns in my laptop as raid5 with windows over 1/0.256mbit dsl
[16:31] <darkfaded> didn't really perform well
[16:31] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:31] <MACscr> ha, nice one
[16:31] <darkfaded> and the others suddenly had to work instead of websurfing
[16:33] * BillK (~BillK-OFT@124-168-243-244.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[16:34] * ChanServ changes topic to 'Latest stable (v0.61.7 "Cuttlefish") -- http://ceph.com/get || Ceph Developer Summit: Emperor - http://goo.gl/yy2Jh || Ceph Day NYC 01AUG2013 - http://goo.gl/TMIrZ'
[16:35] <loicd> sjust: reading https://github.com/athanatos/ceph/blob/wip-erasure-coding-doc/doc/dev/osd_internals/erasure_coding.rst made me realize that I failed to realize what role the logs have to ensure the consistency of a placement group.
[16:35] <loicd> s/realize/understand/
[16:36] * AfC (~andrew@2001:44b8:31cb:d400:9df4:2ccc:9268:e469) Quit (Quit: Leaving.)
[16:37] <loicd> I wrote enough unit tests on log merging to know they are involved. But when I thought about the erasure code implementation, I somehow thought the logs would not need to be modified.
[16:41] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * rongze (~quassel@754fe8ea.test.dnsbl.oftc.net) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * jluis (~joao@89.181.144.108) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * wrencsok1 (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * ShaunR (~ShaunR@staff.ndchost.com) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * maciek (maciek@0001bab6.user.oftc.net) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * scalability-junk (uid6422@ealing.irccloud.com) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * eternaleye (~eternaley@2002:3284:29cb::1) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * psomas (~psomas@inferno.cc.ece.ntua.gr) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * elmo (~james@faun.canonical.com) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * joelio (~Joel@88.198.107.214) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * nyerup (irc@jespernyerup.dk) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * Elbandi (~ea333@elbandi.net) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * nlopes (~nlopes@a89-154-18-198.cpe.netcabo.pt) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * mkoderer (uid11949@ealing.irccloud.com) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * sage (~sage@76.89.177.113) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * link0 (~dennisdeg@backend0.link0.net) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * hijacker (~hijacker@213.91.163.5) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * wogri (~wolf@nix.wogri.at) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * cephalobot (~ceph@ds2390.dreamservers.com) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:41] * tomaw (tom@tomaw.netop.oftc.net) Quit (reticulum.oftc.net coulomb.oftc.net)
[16:42] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[16:42] * rongze (~quassel@754fe8ea.test.dnsbl.oftc.net) has joined #ceph
[16:42] * jluis (~joao@89.181.144.108) has joined #ceph
[16:42] * wrencsok1 (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[16:42] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[16:42] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[16:42] * maciek (maciek@0001bab6.user.oftc.net) has joined #ceph
[16:42] * scalability-junk (uid6422@ealing.irccloud.com) has joined #ceph
[16:42] * elmo (~james@faun.canonical.com) has joined #ceph
[16:42] * eternaleye (~eternaley@2002:3284:29cb::1) has joined #ceph
[16:42] * psomas (~psomas@inferno.cc.ece.ntua.gr) has joined #ceph
[16:42] * joelio (~Joel@88.198.107.214) has joined #ceph
[16:42] * nyerup (irc@jespernyerup.dk) has joined #ceph
[16:42] * Elbandi (~ea333@elbandi.net) has joined #ceph
[16:42] * nlopes (~nlopes@a89-154-18-198.cpe.netcabo.pt) has joined #ceph
[16:42] * mkoderer (uid11949@ealing.irccloud.com) has joined #ceph
[16:42] * sage (~sage@76.89.177.113) has joined #ceph
[16:42] * link0 (~dennisdeg@backend0.link0.net) has joined #ceph
[16:42] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[16:42] * wogri (~wolf@nix.wogri.at) has joined #ceph
[16:42] * cephalobot (~ceph@ds2390.dreamservers.com) has joined #ceph
[16:42] * tomaw (tom@tomaw.netop.oftc.net) has joined #ceph
[16:42] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) Quit (Remote host closed the connection)
[16:42] * wido (~wido@2a00:f10:121:100:4a5:76ff:fe00:199) has joined #ceph
[16:50] <loicd> sjust: when you write "have persisted the deletion event" you mean persisted the delete pg log entry ?
[16:52] <loicd> persisted meaning the deletion pg log event is not only in memory but also on disk
[16:57] * julian (~julianwa@125.70.133.36) Quit (Quit: afk)
[16:59] * sprachgenerator (~sprachgen@130.202.135.194) has joined #ceph
[16:59] * mschiff_ (~mschiff@pD9511FE4.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[17:07] * loicd trying to figure out why rolling back CEPH_OSD_OP_DELETE is so complicated
[17:19] * diegows (~diegows@200.68.116.185) has joined #ceph
[17:24] * yehudasa__ (~yehudasa@2602:306:330b:1410:2420:498a:1917:b8f9) has joined #ceph
[17:26] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[17:26] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:30] <loicd> client sends delete => primary sends pg_log_entry_t DELETE to replicas => only one replica persists DELETE => OSDs go down => return EAGAIN to the client => OSDs recover and the DELETE that has been persisted in the logs is found to be divergent => the DELETE is rolled back and the object marked missing on the OSD where the delete was found to be divergent
[17:33] <loicd> that's how I understand the need to "undelete" / rollback a delete in a replicated pg
[17:34] * devoid (~devoid@130.202.135.210) has joined #ceph
[17:35] <loicd> after the object is marked missing, it will be recovered / copied from another OSD
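The scenario loicd walks through above can be condensed into a toy model. This is a hypothetical sketch, not Ceph code: a DELETE log entry persisted on only one replica is found divergent against the authoritative log during peering, the delete is rolled back, and the object (now marked missing) is recovered from a peer that still holds it.

```python
# Toy model (hypothetical, NOT Ceph code) of the divergent-delete rollback
# loicd describes above.

class Replica:
    def __init__(self, name):
        self.name = name
        self.objects = {"obj": "data"}  # local object store
        self.log = []                   # persisted pg log entries
        self.missing = set()

def peer(replicas):
    # Authoritative log: here, naively, the log held by the majority.
    logs = [tuple(r.log) for r in replicas]
    authoritative = max(set(logs), key=logs.count)
    for r in replicas:
        divergent = [e for e in r.log if e not in authoritative]
        for op, obj in divergent:
            if op == "DELETE":
                # Roll back the delete: the object must be recovered.
                r.missing.add(obj)
        r.log = [e for e in r.log if e in authoritative]
    # Recovery: copy each missing object back from a replica that has it.
    for r in replicas:
        for obj in list(r.missing):
            src = next(p for p in replicas if obj in p.objects)
            r.objects[obj] = src.objects[obj]
            r.missing.discard(obj)

primary, r1, r2 = Replica("osd.0"), Replica("osd.1"), Replica("osd.2")
r1.log.append(("DELETE", "obj"))  # only osd.1 persists the DELETE
del r1.objects["obj"]
# ...all OSDs go down and come back; peering finds the DELETE divergent...
peer([primary, r1, r2])
assert "obj" in r1.objects  # delete rolled back, object recovered from a peer
```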
[17:35] * loicd talking to himself mostly ;-)
[17:37] * loicd trying to understand exactly why the delete operation cannot just return to the client after marking the object as deleted in the authoritative logs ( the OSD primary logs )
[17:38] <loicd> if the primary goes down
[17:38] <loicd> permanently
[17:38] <loicd> another is elected and a new authoritative log is chosen
[17:38] * forgery (~oftc-webi@gw.vpn.autistici.org) Quit (Remote host closed the connection)
[17:39] <loicd> and if the primary OSD did not have a chance to persist the log entry on *any* other OSD ... the object will suddenly resurrect. That would not be good.
[17:39] <yanzheng> and the client that requested deleting the object is also down
[17:41] <yanzheng> if another client reads the object before the primary goes down, the primary osd returns -ENOENT
[17:41] <loicd> so deletion can be acked to the client when there is no way the object can be resurrected. Meaning when all replicas have acked the deletion, in the case of a replicated pg. Or when enough chunks have been deleted that there is no way for erasure code to reconstruct the object from the remaining chunks.
[17:42] <loicd> yanzheng: yes :-) That starts to make sense.
[17:43] * loicd reparsing sjust sentence "CEPH_OSD_OP_DELETE: The possibility of rolling back a delete requires that we retain the deleted object until all replicas have persisted the deletion event. "
[17:43] <loicd> true for replicated pg indeed.
[17:43] <loicd> "ErasureCoded backend will therefore need to store objects with the version at which they were created included in the key provided to the filestore. Old versions of an object can be pruned when all replicas have committed up to the log event deleting the object."
[17:44] <loicd> still puzzles me
[17:47] * ishkabob (~c7a82cc0@webuser.thegrebs.com) has joined #ceph
[17:48] <ishkabob> hi again ceph devs :)
[17:48] <yanzheng> for the case object is deleted, then recreated?
[17:49] <loicd> yanzheng: interesting idea
[17:50] <ishkabob> i'm trying to deploy Ceph using puppet, and I'm having some trouble with keyrings (I think). I've created a keyring with ceph-authtool and dropped it in all the daemon directories as well as /etc/ceph. When I try to launch Ceph, I get this output - http://pastebin.com/azgKX1Gn
[17:51] * haomaiwang (~haomaiwan@notes4.com) Quit (Ping timeout: 480 seconds)
[18:00] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:01] * bergerx_ (~bekir@78.188.101.175) Quit (Quit: Leaving.)
[18:01] <cmdrk> blargh. i'm trying to remove ceph and reinstall, and i've cleared out /var/lib/ceph and /var/run/ceph on my machines. now i have a new Ceph going and I'm trying to add OSDs and it doesn't seem to be working
[18:02] <cmdrk> the mon is repeating a lot of "ignoring fsid" entries
[18:02] <cmdrk> did I miss something?
[18:03] * loicd contemplating https://github.com/ceph/ceph/blob/master/src/test/os/TestLFNIndex.cc#L128 to remember what object names are like when converted into file names in the object store
[18:03] * JM__ (~oftc-webi@193.252.138.241) Quit (Quit: Page closed)
[18:04] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[18:05] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[18:08] * mikedawson_ (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[18:13] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[18:13] * mikedawson_ is now known as mikedawson
[18:14] <loicd> cmdrk: you cleared /etc/ceph too & killed the daemons ?
[18:14] <loicd> ceph-deploy purge & purgedata do the right thing to clear a node
[18:15] <loicd> it's brutally efficient ;-)
[18:18] * rudolfsteiner (~federicon@190.220.6.50) has joined #ceph
[18:19] <devoid> is it CephFS or Ceph FS?
[18:24] * zackc (~zack@0001ba60.user.oftc.net) Quit (Quit: leaving)
[18:25] * zack_ (~zack@formosa.juno.dreamhost.com) has joined #ceph
[18:25] * zack_ is now known as zackc
[18:26] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has left #ceph
[18:31] <cmdrk> yeah ive been using mkcephfs :( time to learn ceph-deploy i suppose!
[18:31] * huangjun (~kvirc@106.120.176.62) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[18:31] <ishkabob> perhaps I can ask a different question. If I DON'T want to use ceph-deploy, what is the absolute minimum set of keys that I need to get Ceph running?
[18:32] <ishkabob> and how do I generate them
[18:32] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) has joined #ceph
[18:35] * devoid (~devoid@130.202.135.210) Quit (Ping timeout: 480 seconds)
[18:50] <alfredodeza> n1md4: ping
[18:51] <alfredodeza> n1md4: this is the problem you got into the other day: http://tracker.ceph.com/issues/5208
[18:52] <alfredodeza> I am currently working on fixing it, should be done soon(ish)
[18:52] <alfredodeza> just letting you know :)
[18:57] * dpippenger (~riven@cpe-76-166-208-83.socal.res.rr.com) Quit (Remote host closed the connection)
[18:58] * sjusthm (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[18:59] * sjustlaptop (~sam@24-205-35-233.dhcp.gldl.ca.charter.com) has joined #ceph
[19:03] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has joined #ceph
[19:11] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has left #ceph
[19:11] * rudolfsteiner (~federicon@190.220.6.50) Quit (Ping timeout: 480 seconds)
[19:13] * rudolfsteiner (~federicon@200.68.116.185) has joined #ceph
[19:14] <n1md4> alfredodeza: ... well, this might have been my problem, but the simple fix was to install ca-certificates
[19:14] <n1md4> ah, which now I read the bug, we came to the same end :)
[19:14] <alfredodeza> yep, the bug fix is to install that on debian wheezy
[19:15] <alfredodeza> right
[19:15] <n1md4> great, thanks.
[19:15] <alfredodeza> just wanted to let you know that *I* know now and it should be fixed soon
[19:15] <alfredodeza> :D
[19:15] <n1md4> hah
[19:16] <n1md4> i'm further along now, and have 8 osds across 2 nodes. but many pgs are active+degraded. should this be an indication that intervention is required, or is it fixing itself?
[19:16] * zackc (~zack@0001ba60.user.oftc.net) Quit (Quit: leaving)
[19:16] <n1md4> there were stale+active+degraded, but the stale pgs fixed themselves..
[19:17] <n1md4> (should have said, stale+active+clean)
[19:17] * zack_ (~zack@formosa.juno.dreamhost.com) has joined #ceph
[19:17] * zack_ is now known as zackc
[19:22] <mikedawson> joshd: ping
[19:25] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has left #ceph
[19:25] * diegows (~diegows@200.68.116.185) Quit (Ping timeout: 480 seconds)
[19:25] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[19:28] * rudolfsteiner (~federicon@200.68.116.185) Quit (Quit: rudolfsteiner)
[19:29] <gregmark> ishkabob: read this: http://ceph.com/docs/next/dev/mon-bootstrap/
[19:29] <ishkabob> thanks gremark, will do
[19:29] <ishkabob> gregmark
[19:33] <n1md4> any one about to check this and provide assistance http://pastie.org/pastes/8197346/text , I would really appreciate it.
[19:35] * lxo (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[19:35] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:35] <mikedawson> n1md4: http://ceph.com/docs/next/rados/operations/monitoring-osd-pg/#degraded
[19:37] * alram (~alram@cpe-76-167-50-51.socal.res.rr.com) has joined #ceph
[19:37] <n1md4> mikedawson: thanks.
[19:37] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[19:38] <mikedawson> n1md4: 'ceph pg dump | grep degraded' may show you something useful
[19:39] <n1md4> ha, actually shows me too much
[19:39] <n1md4> well, it shows me the 189 degraded pgs
[19:41] <mikedawson> n1md4: you may have an OSD (or more than one) that is misbehaving. you can see where each Placement Group lives by examining the osds; it'll look something like [3,62,28], meaning osd.3, osd.62, and osd.28, where osd.3 is primary.
[19:43] * terje_ (~joey@97-118-115-214.hlrn.qwest.net) Quit (Ping timeout: 480 seconds)
[19:43] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) Quit (Ping timeout: 480 seconds)
[19:46] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) has joined #ceph
[19:49] * saml (~sam@adfb12c6.cst.lightpath.net) Quit (Quit: Leaving)
[19:50] * jjgalvez (~jjgalvez@ip72-193-217-254.lv.lv.cox.net) has joined #ceph
[19:50] * smiley (~smiley@rrcs-208-105-51-49.nyc.biz.rr.com) has joined #ceph
[19:51] * jjgalvez (~jjgalvez@ip72-193-217-254.lv.lv.cox.net) Quit (Read error: Connection reset by peer)
[19:51] * jjgalvez (~jjgalvez@ip72-193-217-254.lv.lv.cox.net) has joined #ceph
[19:56] <ishkabob> @gregmark
[19:56] <cephalobot> ishkabob: Error: "gregmark" is not a valid command.
[19:56] <ishkabob> gregmark: sorry, forgot IRC commands :)
[19:57] <ishkabob> gregmark: thank you so much for this page, its really illuminating. I was wondering if using this approach, how does one specify the cluster name?
[19:58] <gregmark> ishkabob: dude, I've done that a million times. oddly, it actually ticks off some people.
[19:58] <ishkabob> gregmark: hah, yeah, people are all THIS ISN'T TWITTER!!!!
[19:59] <gregmark> ishkabob: I used ceph-deploy to set up my cluster, but if you run ceph -h it should show you how to do that
[20:00] <ishkabob> gregmark: thanks, yeah we're trying to do this with puppet so I need to learn everything I never wanted to know about Ceph :)
[20:01] * smiley (~smiley@rrcs-208-105-51-49.nyc.biz.rr.com) has left #ceph
[20:01] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has joined #ceph
[20:02] * sagelap (~sage@216.194.44.151) has joined #ceph
[20:06] * TiCPU (~jeromepou@190-130.cgocable.ca) has joined #ceph
[20:06] * terje_ (~joey@63-154-145-89.mpls.qwest.net) has joined #ceph
[20:11] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[20:14] <joshd> mikedawson: pong
[20:15] * terje_ (~joey@63-154-145-89.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[20:15] <mikedawson> joshd: is there a good way to test a qemu package to see if it has your async patch? jamespage pointed me to http://people.canonical.com/~serge/qemu-rbd-async/ which seems to have it. I installed via Serge's PPA at https://launchpad.net/~serge-hallyn/+archive/virt?field.series_filter=raring which may have a newer version. But I'm not sure it has your patch.
[20:17] * joao (~JL@89.181.144.108) Quit (Quit: Leaving)
[20:17] * dpippenger (~riven@tenant.pas.idealab.com) has joined #ceph
[20:18] <joshd> mikedawson: this might do it: strings `which qemu-system-x86_64` | grep rbd_aio_flush
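joshd's check works because a qemu built with the async patch against a new enough librbd references the `rbd_aio_flush` symbol, so `strings | grep` finds it in the binary. The same test can be sketched in Python; the file written here is a fake stand-in so the snippet is self-contained, and in practice `path` would point at the real `qemu-system-x86_64` binary.

```python
# Sketch of joshd's "strings $(which qemu-system-x86_64) | grep rbd_aio_flush"
# check: scan a binary for a byte string. The fake file below is a stand-in
# for a real qemu binary (hypothetical content, for illustration only).
import tempfile

def references_symbol(path, symbol=b"rbd_aio_flush"):
    """Return True if the file at `path` contains the given byte string."""
    with open(path, "rb") as f:
        return symbol in f.read()

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x7fELF...\x00rbd_aio_flush\x00...")
    path = f.name

print(references_symbol(path))  # True: async flush support likely present
```

Note that, as joshd points out below, the symbol being present does not guarantee the feature is enabled at runtime.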
[20:21] <mikedawson> joshd: maybe it doesn't have it: http://pastebin.com/raw.php?i=zw28Jz44
[20:23] <joshd> mikedawson: it could include the async flush patch, but not actually enable it if it's not compiled against new enough ceph
[20:23] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[20:24] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[20:28] * devoid (~devoid@130.202.135.210) has joined #ceph
[20:28] <mikedawson> joshd: hrm. The other package http://people.canonical.com/~serge/qemu-rbd-async/qemu-system-x86_1.4.0+dfsg-1expubuntu5_amd64.deb didn't have rbd_aio_flush either
[20:29] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (Quit: WeeChat 0.3.8)
[20:29] <mikedawson> joshd: are 1.5, 1.5.1, or 1.5.2 tested/advisable?
[20:29] * sagelap1 (~sage@2600:1001:b116:55f3:2def:815e:75b9:3a94) has joined #ceph
[20:32] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[20:32] <joshd> mikedawson: the only bug I've seen about them was some performance issue if you use an old machine type, but I can't find the thread now
[20:32] <mikedawson> joshd: saw that, too
[20:33] <joshd> mikedawson: I'm not sure about libvirt compatibility
[20:33] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[20:35] * sagelap (~sage@216.194.44.151) Quit (Ping timeout: 480 seconds)
[20:38] * scuttlemonkey (~scuttlemo@208.184.126.2) has joined #ceph
[20:38] * ChanServ sets mode +o scuttlemonkey
[20:50] * saabylaptop (~saabylapt@1009ds5-oebr.1.fullrate.dk) has joined #ceph
[20:54] * saabylaptop (~saabylapt@1009ds5-oebr.1.fullrate.dk) Quit ()
[20:57] * rudolfsteiner (~federicon@200.68.116.185) has joined #ceph
[21:03] * scuttlemonkey (~scuttlemo@208.184.126.2) Quit (Ping timeout: 480 seconds)
[21:05] * diegows (~diegows@200.68.116.185) has joined #ceph
[21:11] * aliguori (~anthony@32.97.110.51) Quit (Remote host closed the connection)
[21:14] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[21:16] * markbby (~Adium@168.94.245.3) has joined #ceph
[21:20] * dxd828 (~dxd828@host-2-97-70-33.as13285.net) has joined #ceph
[21:20] * sagelap1 (~sage@2600:1001:b116:55f3:2def:815e:75b9:3a94) Quit (Read error: Connection reset by peer)
[21:26] * sagelap1 (~sage@2600:1001:b116:55f3:c685:8ff:fe59:d486) has joined #ceph
[21:29] * ishkabob (~c7a82cc0@webuser.thegrebs.com) Quit (Quit: TheGrebs.com CGI:IRC)
[21:30] * sagelap1 (~sage@2600:1001:b116:55f3:c685:8ff:fe59:d486) Quit (Read error: Connection reset by peer)
[21:32] * terje (~joey@63-154-145-89.mpls.qwest.net) has joined #ceph
[21:40] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[21:40] * terje (~joey@63-154-145-89.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[21:41] * markbby (~Adium@168.94.245.3) Quit (Ping timeout: 480 seconds)
[21:43] * SvenPHX (~scarter@wsip-174-79-34-244.ph.ph.cox.net) has left #ceph
[21:43] <paravoid> hey, 0.67-rc3 release announcement went out but no rc3 packages exist in http://ceph.com/debian-testing/
[21:44] <paravoid> nor do -rc3 release notes exist in http://ceph.com/docs/master/release-notes/
[21:45] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[21:50] * SvenPHX (~scarter@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[21:51] * SvenPHX (~scarter@wsip-174-79-34-244.ph.ph.cox.net) has left #ceph
[22:06] * Vjarjadian (~IceChat77@90.214.208.5) has joined #ceph
[22:13] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[22:14] * xmltok (~xmltok@pool101.bizrate.com) has joined #ceph
[22:18] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[22:25] * markbby (~Adium@168.94.245.3) has joined #ceph
[22:26] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:42] * terje (~joey@63-154-135-113.mpls.qwest.net) has joined #ceph
[22:49] * mozg (~andrei@host109-151-35-94.range109-151.btcentralplus.com) has joined #ceph
[22:50] * terje (~joey@63-154-135-113.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[22:52] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[22:54] * kyann (~oftc-webi@did75-15-88-160-187-237.fbx.proxad.net) has joined #ceph
[22:57] * rudolfsteiner (~federicon@200.68.116.185) Quit (Quit: rudolfsteiner)
[23:10] <sjustlaptop> loicd: added some information about peering and pg log selection, erasure coded pgs and replicated pgs have somewhat different requirements for completing peering and for authoritative log selection, apparently
[23:17] * terje_ (~joey@63-154-135-113.mpls.qwest.net) has joined #ceph
[23:23] * terje (~joey@63-154-135-113.mpls.qwest.net) has joined #ceph
[23:25] * nwf_ (~nwf@67.62.51.95) has joined #ceph
[23:26] * terje_ (~joey@63-154-135-113.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[23:26] * nwf (~nwf@67.62.51.95) Quit (Read error: Connection reset by peer)
[23:31] * terje (~joey@63-154-135-113.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[23:33] * joao (~JL@89.181.144.108) has joined #ceph
[23:33] * ChanServ sets mode +o joao
[23:35] * loicd reading
[23:36] * saabylaptop (~saabylapt@1009ds5-oebr.1.fullrate.dk) has joined #ceph
[23:37] <joao> kyann, around?
[23:39] <kyann> joao: yes
[23:39] * saabylaptop (~saabylapt@1009ds5-oebr.1.fullrate.dk) Quit ()
[23:40] * sagelap (~sage@216.194.44.151) has joined #ceph
[23:42] * DarkAce-Z is now known as DarkAceZ
[23:47] * lautriv (~lautriv@f050082253.adsl.alicedsl.de) has joined #ceph
[23:48] <loicd> sjustlaptop: what is the purpose of a pgtemp ?
[23:48] <sjustlaptop> loicd: acting is [0,1,2] and we are active+clean
[23:48] <sjustlaptop> something happens and acting is now [3,1,2]
[23:48] <sjustlaptop> osd 3 is empty and doesn't know anything
[23:48] <sjustlaptop> can't serve reads
[23:49] <sjustlaptop> and doesn't know which objects are in the pg
[23:49] <sjustlaptop> so osd.3 will see that and request a pg temp of [1,2,3]
[23:49] <sjustlaptop> with osd.1 as primary
[23:49] <sjustlaptop> osd.1 will become primary and select osd.3 as a backfill peer
[23:49] <sjustlaptop> and continue to serve reads and writes while osd.3 is backfilled
[23:49] <loicd> understood, thanks :-)
[23:50] <sjustlaptop> yup
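sjustlaptop's pg_temp walkthrough above can be condensed into a toy sketch. This is a hypothetical helper, not Ceph code: when the CRUSH-assigned acting set would put an empty OSD in the primary slot, request a temporary acting set that moves an OSD with data to the front, leaving the newcomer to be backfilled.

```python
# Toy sketch (hypothetical, NOT Ceph code) of the pg_temp mechanism
# sjustlaptop describes: reorder the acting set so a data-holding OSD
# serves as primary while the empty OSD is backfilled.

def choose_pg_temp(crush_acting, has_data):
    """Return a temporary acting set, or None if the CRUSH set is usable."""
    primary = crush_acting[0]
    if has_data[primary]:
        return None  # CRUSH primary can serve reads; no pg_temp needed
    # OSDs with the PG's data go first (stable sort keeps their order);
    # the empty OSD goes last and becomes the backfill target.
    return sorted(crush_acting, key=lambda osd: not has_data[osd])

has_data = {0: True, 1: True, 2: True, 3: False}  # osd.3 just joined, empty
print(choose_pg_temp([0, 1, 2], has_data))  # None: [0,1,2] is fine as-is
print(choose_pg_temp([3, 1, 2], has_data))  # [1, 2, 3]: osd.1 primary, osd.3 backfilled
```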
[23:50] <lautriv> *wave* me again ;) since i solved all bugs, it seems to be fine so far, but i get mount error 1 = Operation not permitted, which looks like the wrong key o.O could it be that a key was stored somewhere outside /var/lib/ceph/* or /etc/ceph?
[23:51] * mschiff (~mschiff@port-36117.pppoe.wtnet.de) has joined #ceph
[23:52] * nwf_ (~nwf@67.62.51.95) Quit (Ping timeout: 480 seconds)
[23:53] * dxd828 (~dxd828@host-2-97-70-33.as13285.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[23:53] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[23:53] * jskinner (~jskinner@69.170.148.179) Quit (Remote host closed the connection)
[23:53] * bandrus (~Adium@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[23:56] <lautriv> ok, forget that question, it was a newline confusing things :)
[23:56] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.