#ceph IRC Log


IRC Log for 2013-07-29

Timestamps are in GMT/BST.

[0:02] * Vjarjadian (~IceChat77@90.214.208.5) Quit (Quit: I cna ytpe 300 wrods pre mniuet!!!)
[0:03] * Vjarjadian (~IceChat77@90.214.208.5) has joined #ceph
[0:06] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[0:07] <sage> loicd: which cache?
[0:09] * mozg (~andrei@host109-151-35-94.range109-151.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[0:28] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:40] * LeaChim (~LeaChim@0540adc6.skybroadband.com) Quit (Ping timeout: 480 seconds)
[0:52] * lautriv (~lautriv@f050081250.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[1:00] * lautriv (~lautriv@f050082152.adsl.alicedsl.de) has joined #ceph
[1:02] * danieagle (~Daniel@186.214.77.206) Quit (Quit: Inte+ :-) e Muito Obrigado Por Tudo!!! ^^)
[1:11] * AfC (~andrew@2001:44b8:31cb:d400:bc9c:a858:863c:7398) has joined #ceph
[1:13] * sleinen (~Adium@2001:620:0:25:286a:8328:124:5401) Quit (Quit: Leaving.)
[1:13] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[1:21] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:38] * markl (~mark@tpsit.com) Quit (Ping timeout: 480 seconds)
[1:43] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Read error: Connection reset by peer)
[2:16] * DaChun (~quassel@222.76.56.254) has joined #ceph
[2:23] * huangjun (~kvirc@111.175.165.62) has joined #ceph
[2:42] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[2:44] * AfC (~andrew@2001:44b8:31cb:d400:bc9c:a858:863c:7398) Quit (Quit: Leaving.)
[2:49] <loicd> sage: the cached value that makes blockdev --getsize64 /dev/rbd0 return the previous size after an rbd resize while /dev/rbd0 contains a mounted file system
[2:51] * DaChun (~quassel@222.76.56.254) Quit (Ping timeout: 480 seconds)
[3:17] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:19] * yy-nm (~chatzilla@115.198.96.222) has joined #ceph
[3:34] <lautriv> ok, no idea why that happens, if i prepare a disk i see this :
[3:34] <lautriv> Sector size (logical/physical): 512B/512B
[3:34] <lautriv> Partition Table: gpt
[3:34] <lautriv> Number Start End Size File system Name Flags
[3:34] <lautriv> 17.4kB 1049kB 1031kB Free Space
[3:34] <lautriv> 1 1049kB 147GB 147GB ceph data
[3:34] <lautriv> but right after it comes back from disk prepare, i have unrecognised disk label ?
[3:37] * jluis (~JL@89.181.148.68) Quit (Ping timeout: 480 seconds)
[3:37] <lurbs> loicd: Does 'blockdev --rereadpt /path/to/rbd' help?
[3:39] <loicd> lurbs: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-June/002425.html says it won't help but I'll try anyway ;-)
[3:40] <loicd> sudo blockdev --getsize64 /dev/rbd0
[3:40] <loicd> 52428800000
[3:40] <loicd> sudo lsblk /dev/rbd0
[3:40] <loicd> NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
[3:40] <loicd> rbd0 251:0 0 58.6G 0 disk /mnt
[3:40] <loicd> (this one is accurate)
[3:41] <loicd> sudo blockdev --rereadpt /dev/rbd0
[3:41] <loicd> sudo blockdev --getsize64 /dev/rbd0
[3:41] <loicd> 52428800000
[3:41] <loicd> no change
[3:41] <loicd> it does change when I umount /mnt though
[3:43] <loicd> the trick seems to trigger "something" that makes ioctl(BLKGETSIZE64) find the new size instead of the old size
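A minimal sketch of the resize-and-refresh sequence being discussed, assuming an image named "test" in the default rbd pool mapped as /dev/rbd0 and carrying an XFS filesystem (image name, sizes and filesystem are illustrative, not taken from this conversation):

    $ rbd resize --size 61440 rbd/test      # grow the image (size in MB)
    $ sudo blockdev --getsize64 /dev/rbd0   # may still show the old size while /dev/rbd0 is mounted
    $ sudo umount /mnt                      # unmounting lets the kernel pick up the new size
    $ sudo blockdev --getsize64 /dev/rbd0   # now reflects the resize
    $ sudo mount /dev/rbd0 /mnt
    $ sudo xfs_growfs /mnt                  # grow the filesystem into the new space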
[3:50] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:15] * DarkAce-Z (~BillyMays@50.107.55.36) has joined #ceph
[4:18] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[4:24] * markbby (~Adium@168.94.245.3) has joined #ceph
[4:26] * DarkAce-Z is now known as DarkAceZ
[4:28] * julian (~julianwa@125.70.133.36) has joined #ceph
[4:49] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:56] * AfC (~andrew@jim1020952.lnk.telstra.net) has joined #ceph
[5:01] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[5:04] * markbby (~Adium@168.94.245.3) has joined #ceph
[5:05] * fireD (~fireD@93-142-241-203.adsl.net.t-com.hr) has joined #ceph
[5:07] * fireD_ (~fireD@93-139-141-122.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:10] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:29] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[5:48] * root_ (~chatzilla@218.94.22.130) Quit (Ping timeout: 480 seconds)
[5:51] * AfC (~andrew@jim1020952.lnk.telstra.net) Quit (Quit: Leaving.)
[6:14] * ShaunR (~ShaunR@staff.ndchost.com) Quit (Read error: Connection reset by peer)
[6:14] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[6:16] * glowell1 (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) has joined #ceph
[6:16] * glowell1 (~glowell@c-98-210-224-250.hsd1.ca.comcast.net) Quit ()
[6:49] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[7:04] * capri (~capri@212.218.127.222) Quit (Quit: Verlassend)
[7:04] * capri (~capri@212.218.127.222) has joined #ceph
[7:12] * AfC (~andrew@2001:44b8:31cb:d400:88e5:11d7:a40f:fa9) has joined #ceph
[8:03] * jaydee (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) has joined #ceph
[8:05] * silversurfer (~jeandanie@124x35x46x12.ap124.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[8:19] * bergerx_ (~bekir@78.188.101.175) has joined #ceph
[8:20] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:48] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) has joined #ceph
[8:49] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit ()
[9:04] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[9:05] * sleinen1 (~Adium@2001:620:0:26:b0c9:58fc:2f34:d4d4) has joined #ceph
[9:06] * sleinen1 (~Adium@2001:620:0:26:b0c9:58fc:2f34:d4d4) Quit ()
[9:06] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Read error: Connection reset by peer)
[9:10] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:15] * cjh_ (~cjh@ps123903.dreamhost.com) Quit (Ping timeout: 480 seconds)
[9:22] * jaydee (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) Quit (Quit: Konversation terminated!)
[9:22] * jaydee (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) has joined #ceph
[9:27] * frank9999 (~frank@kantoor.transip.nl) has joined #ceph
[9:28] * silversurfer (~jeandanie@124x35x46x12.ap124.ftth.ucom.ne.jp) has joined #ceph
[9:28] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[9:32] * jaydee (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[9:35] * jaydee (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) has joined #ceph
[9:36] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[9:38] * silversurfer (~jeandanie@124x35x46x12.ap124.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[9:41] * odyssey4me (~odyssey4m@165.233.205.190) has joined #ceph
[9:52] * LeaChim (~LeaChim@0540adc6.skybroadband.com) has joined #ceph
[9:54] * jbd_ (~jbd_@34322hpv162162.ikoula.com) has joined #ceph
[9:56] * syed_ (~chatzilla@1.22.219.17) has joined #ceph
[9:56] * waxzce (~waxzce@glo44-2-82-225-224-38.fbx.proxad.net) Quit (Remote host closed the connection)
[9:59] * ccourtaut (~ccourtaut@2001:41d0:1:eed3::1) has joined #ceph
[9:59] <ccourtaut> morning
[10:00] * frank9999 (~frank@kantoor.transip.nl) Quit ()
[10:06] * jcfischer (~fischer@peta-dhcp-13.switch.ch) Quit (Ping timeout: 480 seconds)
[10:08] <huangjun> can we bind a process in ceph, like ceph-osd, to a system user?
[10:12] <syed_> huangjun: system user ?
[10:15] <huangjun> a user i create by useradd
[10:21] <lautriv> can i create the initial content of a disk with something else than disk prepare ?
[10:21] * julian (~julianwa@125.70.133.36) Quit (Read error: Connection reset by peer)
[10:22] * julian (~julianwa@125.70.133.36) has joined #ceph
[10:25] <huangjun> lautriv: you mean create an osd with data?
[10:25] <lautriv> huangjun, yes, because disk prepare damages my disklabel at the end of that process. ( only on one box )
[10:28] <lautriv> huangjun, see, if i do either disk prepare or osd create --zap-disk on these drives, i have this :
[10:28] <lautriv> Sector size (logical/physical): 512B/512B
[10:28] <lautriv> Partition Table: gpt
[10:28] <lautriv> Number Start End Size File system Name Flags
[10:28] <lautriv> 17.4kB 1049kB 1031kB Free Space
[10:28] <lautriv> 1 1049kB 147GB 147GB ceph data
[10:28] <lautriv> which is fine, but when the command comes back, i have an unrecognized disklabel
[10:30] <huangjun> so you can not create successfully?
[10:30] <huangjun> can you show the deploy commands
[10:31] <lautriv> but only on this box, another box is fine. all tools are the same version, controller is LSI Logic, drives are Seagate enterprise-class and work well with anything but ceph.
[10:33] <lautriv> example-command ( tried all possibilities ) : ceph-deploy osd create --zap-disk node001:sdc:/dev/sdb1 node001:sdd:/dev/sdb2
[10:36] <huangjun> uhh, we don't use this very much, and found that it's more likely to succeed if you create the osd with: ceph-deploy disk zap host:/dev/sdX ; ceph-deploy osd prepare host:/dev/sdX ; ceph-deploy osd activate host:/dev/sdX
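A sketch of the three-step flow huangjun describes, assuming a host named node001, a data disk /dev/sdc and a journal partition /dev/sdb1 (the exact host:disk:journal argument format has varied a little between ceph-deploy versions):

    $ ceph-deploy disk zap node001:/dev/sdc
    $ ceph-deploy osd prepare node001:/dev/sdc:/dev/sdb1
    $ ceph-deploy osd activate node001:/dev/sdc1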
[10:37] <lautriv> huangjun, like i said, i tried all possibilities and it damages the disklabel whenever prepare completes.
[10:38] * odyssey4me (~odyssey4m@165.233.205.190) Quit (Ping timeout: 480 seconds)
[10:40] <huangjun> the disklabel will be destroyed when the disk is formatted, like the uuid
[10:43] <lautriv> yes, i get a new GPT from zap and a new partition from prepare. when i "parted /dev/sdX print free" while the prepare runs, it is fine, but it gets damaged at the end of prepare.
[10:43] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) has joined #ceph
[10:43] * waxzce (~waxzce@office.clever-cloud.com) has joined #ceph
[10:46] <lautriv> huangjun, example before prepare finished :
[10:46] <lautriv> Disk /dev/sdc: 73.4GB
[10:46] <lautriv> Sector size (logical/physical): 512B/512B
[10:46] <lautriv> Partition Table: gpt
[10:46] <lautriv> Number Start End Size File system Name Flags
[10:46] <lautriv> 17.4kB 1049kB 1031kB Free Space
[10:46] <lautriv> 1 1049kB 73.4GB 73.4GB ceph data
[10:48] * waxzce_ (~waxzce@office.clever-cloud.com) has joined #ceph
[10:48] * waxzce (~waxzce@office.clever-cloud.com) Quit (Read error: Connection reset by peer)
[10:48] <lautriv> huangjun, that is absolutely correct but vanishes before the command comes back, i assume some miscalculation after the sgdisk call where ceph fills in the secondary label.
[10:49] <huangjun> so you can debug this by adding a debug log filename in the /usr/sbin/ceph-disk main function
[10:51] * silversurfer (~jeandanie@124x35x46x12.ap124.ftth.ucom.ne.jp) has joined #ceph
[10:51] <huangjun> change that line: logging.basicConfig(filename="/path/to/logfile",level=loglevel,)
[10:54] * jaydee (~jeandanie@124x35x46x15.ap124.ftth.ucom.ne.jp) Quit (Ping timeout: 480 seconds)
[10:56] <lautriv> have to go, will report back in a few h.
[10:56] * Cube (~Cube@cpe-76-95-217-129.socal.res.rr.com) Quit (Quit: Leaving.)
[10:57] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[10:58] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) Quit (Remote host closed the connection)
[10:58] * maciek (maciek@2001:41d0:2:2218::dead) has joined #ceph
[10:59] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) has joined #ceph
[11:00] * waxzce_ (~waxzce@office.clever-cloud.com) Quit (Remote host closed the connection)
[11:00] * waxzce (~waxzce@2a01:e34:ee97:c5c0:9005:4a1b:7256:3dac) has joined #ceph
[11:01] * jjgalvez1 (~jjgalvez@ip72-193-215-88.lv.lv.cox.net) Quit (Quit: Leaving.)
[11:13] * grepory (~Adium@c-69-181-42-170.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[11:16] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) has joined #ceph
[11:27] * ab__ (~oftc-webi@gw.vpn.autistici.org) has joined #ceph
[11:27] <ab__> hello
[11:28] * syed_ (~chatzilla@1.22.219.17) Quit (Ping timeout: 480 seconds)
[11:45] * syed_ (~chatzilla@1.22.219.17) has joined #ceph
[11:48] <ab__> does ceph use iscsi or something else?
[12:01] <huangjun> ab__: you can use iscsi over ceph rbd
[12:02] <ab__> what is a good way to get performance? is it good to use iscsi?
[12:04] <huangjun> the iscsi client on linux can get better performance
[12:05] <ab__> so is it a good idea to create rbd images and use them as iscsi devices?
[12:08] <huangjun> yes, and it is very useful in a virtualization environment, you can try it
[12:09] <ab__> good, i want to try it on guest os's, but i need to enable iscsi on the host os, yeah?
[12:15] <ab__> well, i'm new to ceph and i don't know how it works internally. i thought that iscsi was the lower layer near the physical device (/dev/sdX) and ceph ran over it, but i'm seeing that my picture is wrong
[12:30] <huangjun> you create a ceph rbd image and then export it to the iscsi client as a disk (just like a disk on your local host), then you format the disk and use it
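A rough sketch of that export path using the tgt userspace target on a gateway host; the image name, IQN and tgtadm usage are illustrative assumptions rather than anything recommended in this conversation:

    $ rbd create iscsi-disk --size 10240                          # 10 GB rbd image
    $ sudo rbd map rbd/iscsi-disk                                 # shows up as e.g. /dev/rbd1 on the gateway
    $ sudo tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2013-07.com.example:rbd.iscsi-disk
    $ sudo tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 --backing-store /dev/rbd1
    $ sudo tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL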
[12:31] <mattch> Having some problems trying out ceph-deploy... when I get to doing 'ceph-deploy osd create...' I get the error: raise RuntimeError('bootstrap-osd keyring not found; run \'gatherkeys\''). When I run 'ceph-deploy gatherkeys...' I get the error: Unable to find /var/lib/ceph/bootstrap-osd/ceph.keyring on ['x', 'y', 'z']
[12:37] <huangjun> are you using the hostnames as x,y,z?
[12:41] * jluis (~JL@89.181.148.68) has joined #ceph
[12:44] <mattch> huangjun: No, sorry, I've just replaced the actual hostnames for privacy :)
[12:44] <mattch> using space-separated hostnames on the cmd-line in all commands
[12:45] * yy-nm (~chatzilla@115.198.96.222) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 22.0/20130618035212])
[12:45] <huangjun> try to use x y z or {x y z}
[12:47] <mattch> huangjun: I am
[12:50] * huangjun (~kvirc@111.175.165.62) Quit (Quit: KVIrc 4.2.0 Equilibrium http://www.kvirc.net/)
[12:55] <mattch> It looks like ceph-create-keys isn't being run/is failing during the ceph mon create step. Running it manually (ceph-create-keys -i x y z) spits out keys into /var/lib/ceph/bootstrap-osd on all hosts
[12:55] <mattch> I mean 'ceph-deploy mon create'
[12:57] <mattch> Will try updating to the latest versions of everything and seeing if that fixes things
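For reference, the order in which those keys normally appear with ceph-deploy, assuming mon hosts x, y and z (placeholders as above); gatherkeys can only succeed once ceph-create-keys has finished on the monitors:

    $ ceph-deploy new x y z
    $ ceph-deploy mon create x y z       # starts the mons, which run ceph-create-keys
    $ ceph-deploy gatherkeys x           # pulls the admin and bootstrap-osd/mds keyrings
    $ ceph-deploy osd create x:/dev/sdb  # needs the bootstrap-osd keyring gathered above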
[12:57] * DaChun (~quassel@222.76.56.254) has joined #ceph
[13:02] * Vjarjadian (~IceChat77@90.214.208.5) Quit (Quit: IceChat - Keeping PC's cool since 2000)
[13:08] * DaChun_ (~quassel@27.151.94.141) has joined #ceph
[13:09] * DaChun (~quassel@222.76.56.254) Quit (Ping timeout: 480 seconds)
[13:11] * DaChun (~quassel@27.151.94.141) has joined #ceph
[13:14] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[13:15] * syed_ (~chatzilla@1.22.219.17) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 22.0/20130627172038])
[13:17] * DaChun_ (~quassel@27.151.94.141) Quit (Ping timeout: 480 seconds)
[13:19] * ab__ (~oftc-webi@gw.vpn.autistici.org) Quit (Remote host closed the connection)
[13:20] * DaChun (~quassel@27.151.94.141) Quit (Ping timeout: 480 seconds)
[13:22] * dobber (~dobber@213.169.45.222) has joined #ceph
[13:34] * JM__ (~oftc-webi@193.252.138.241) has joined #ceph
[13:39] * yanzheng (~zhyan@134.134.139.70) has joined #ceph
[13:48] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) has joined #ceph
[13:49] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[13:53] * lupine (~lupine@lupine.me.uk) Quit (Ping timeout: 480 seconds)
[13:54] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[13:59] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[14:04] <r0r_taga> if i'm seeing really poor performance with RBD over a network, but no point of the network is saturated, are there any considerations i should be aware of
[14:04] <r0r_taga> local tests are what i was expecting
[14:09] * stxShadow (~Jens@ip-88-152-161-249.unitymediagroup.de) has joined #ceph
[14:10] <mozg> what kind of network do you have?
[14:10] <mozg> and how do you run your tests?
[14:10] <r0r_taga> this is just a POC, it's some stuff plugged into a gb procuve
[14:10] <r0r_taga> doing lots of tests, throughput, IOPS, bunch of stuff
[14:10] <mozg> r0r_taga: what i've noticed is that performance is okay if you use large block sizes
[14:10] <mozg> but for 4K tests it's not good at all
[14:11] <r0r_taga> i was hitting a limit of some kind, assumed it was Ceph
[14:11] <r0r_taga> as the network was fine
[14:11] <mozg> what performance figures were you getting?
[14:11] <mozg> and what were you expecting
[14:12] <mozg> ceph performance really depends on how your cluster is setup
[14:12] <mozg> how many osds you have
[14:12] <mozg> and if you use ssd for journaling
[14:12] <mozg> i am doing poc as well
[14:12] <mozg> testing stuff at the moment
[14:12] <r0r_taga> this was all SSD
[14:12] <mozg> and i can easily saturate 10G network
[14:13] <mozg> ah, you should be getting good speeds
[14:13] <mozg> how do you run your tests?
[14:13] <mozg> dd?
[14:13] <r0r_taga> i'll dig out the numbers, need to get some data from college
[14:13] <mozg> or other tool?
[14:13] <r0r_taga> dd, some fio tests, bunch of other stuff
[14:13] <mozg> how do you mount ceph?
[14:14] <mozg> are your clients vms or physical servers?
[14:15] <mozg> i am using virtual machines on top of ceph
[14:15] <mozg> kvm + rbd
[14:15] <r0r_taga> i have some hypervisors using rbd, i've run tests from the hypervisor dom0 and vms
[14:15] <mozg> what i've noticed is that performance of a single thread is not impressive
[14:15] <mozg> like if i run single dd
[14:15] <mozg> i would get around 100-120MB/s
[14:16] <mozg> however, if I run 16 dds in parallel, i would get around 1.2-1.4GB/s
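A hedged sketch of the single-stream versus parallel comparison mozg describes, run inside a VM whose disk is an rbd volume (paths, sizes and counts are made up for illustration):

    $ dd if=/dev/zero of=/mnt/test/file0 bs=1M count=4096 oflag=direct        # one stream
    $ for i in $(seq 1 16); do dd if=/dev/zero of=/mnt/test/file$i bs=1M count=1024 oflag=direct & done; wait   # 16 streams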
[14:16] * DarkAce-Z (~BillyMays@50.107.55.36) has joined #ceph
[14:16] <r0r_taga> it was a feasibility study for using Ceph to build a SAN of sorts for a Xen environment (and also hyper-v using some kind of iSCSI proxy, but i haven't got that far yet)
[14:16] <mozg> well
[14:16] <mozg> you will need to enable rbd caching
[14:16] <mozg> to improve performance
[14:16] <mozg> not sure of xen / hyper-v
[14:17] <mozg> but i was using kvm
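The rbd client-side cache mozg mentions is enabled on the client via ceph.conf (with cache=writeback on the qemu drive when using kvm); a minimal illustrative snippet, values are just the usual defaults:

    [client]
        rbd cache = true
        rbd cache size = 33554432        ; 32 MB
        rbd cache max dirty = 25165824   ; 24 MB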
[14:17] * lupine (~lupine@lupine.me.uk) has joined #ceph
[14:17] <mozg> is there xen support for ceph storage?
[14:17] <r0r_taga> not at this time
[14:17] <mozg> i've read that there is experimental support for xenserver
[14:18] <mozg> so there's got to be some work being done with xen as well
[14:20] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[14:21] * agh (~oftc-webi@gw-to-666.outscale.net) has joined #ceph
[14:21] <agh> Hi to all. I've an issue
[14:22] <agh> i did that :
[14:22] <agh> rados mkpool s3 3600 3600
[14:22] <agh> but, when I do a ceph -s to check the state of the cluster, i have this :
[14:22] <agh> pgmap v2766: 240 pgs: 240 active+clean; 374 bytes data, 2140 MB used, 174 TB / 174 TB avail
[14:23] <agh> I do not understand the "240 pgs". I should have lots more, shouldn't I ?
[14:26] <agh> nobody ?
[14:27] <mozg> agh: you are right, you should
[14:27] <mozg> i've done this a few days ago on 0.61.7 and not had any issues
[14:27] <mozg> my pgs were showing up
[14:27] * AfC (~andrew@2001:44b8:31cb:d400:88e5:11d7:a40f:fa9) Quit (Ping timeout: 480 seconds)
[14:27] <agh> i also did it on another cluster without problem...
[14:29] <agh> mm
[14:29] <agh> and when i do
[14:29] <agh> ceph osd pool get s3 pg_num
[14:29] <agh> I get "8"... not 3600
[14:30] <agh> oops it was my fault
[14:31] <agh> don't use rados mkpool
[14:31] <agh> but
[14:31] <agh> ceph osd pool create s3 3600 3600
[14:31] <agh> then it works
[14:31] <mozg> ))
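For anyone hitting the same thing: the trailing numbers to rados mkpool are an auid and a crush rule, not placement-group counts, which is why the pool came up with the default pg_num of 8. The working sequence agh ends up with, plus a quick check:

    $ ceph osd pool create s3 3600 3600     # pg_num and pgp_num
    $ ceph osd pool get s3 pg_num           # should now report 3600 instead of 8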
[14:45] * stxShadow (~Jens@ip-88-152-161-249.unitymediagroup.de) Quit (Read error: Connection reset by peer)
[14:48] <tchmnkyz> I know this is a long shot but is anyone in here Alex from inktank? i need some more info to book his flight.
[14:49] <janos> tchmnkyz: it's probably a little early for his timezone
[14:49] <tchmnkyz> o ok
[14:50] <tchmnkyz> does he frequent here though?
[14:51] * jluis (~JL@89.181.148.68) Quit (Ping timeout: 480 seconds)
[14:55] <janos> not sure. but i think he's california timezone, so a few hours to go for morning
[14:56] <tchmnkyz> yea he is. sometimes the cali guys are up early is all
[14:56] <tchmnkyz> figured it was worth a try
[14:56] <janos> always worth a try!
[14:56] <janos> :)
[14:56] * diegows (~diegows@190.190.2.126) has joined #ceph
[14:57] * mschiff (~mschiff@port-90731.pppoe.wtnet.de) has joined #ceph
[14:58] * Psi-Jack_ (~Psi-Jack@yggdrasil.hostdruids.com) has joined #ceph
[14:59] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[15:00] * AfC (~andrew@2001:44b8:31cb:d400:38ca:4d4c:1dfa:6176) has joined #ceph
[15:03] * smiley (~smiley@pool-173-73-0-53.washdc.fios.verizon.net) Quit (Quit: smiley)
[15:04] * xdeller (~xdeller@91.218.144.129) has joined #ceph
[15:07] <mozg> does anyone know if i can mix sata and sas drives on the same osd server?
[15:07] <mozg> i've got 4 sas drives and i would like to get 4 more drives
[15:07] <mozg> thought to save money
[15:08] <mozg> and get sata drives
[15:08] <mozg> they are about 30% cheaper i think
[15:12] <tchmnkyz> mozg: i have not seen anything on doing it that way. I personally would not, but you could do tests to see how it works.
[15:17] <jeff-YF> i have a somewhat related question. If I want to add SSDs to ceph, is it possible to create a pool that would only use the OSDs that I have configured to map to the SSD drives?
[15:19] <mozg> jeff: i would like an answer to that one as well
[15:19] <mozg> as i would love to have several storage tiers
[15:19] <mozg> fast and slow
[15:19] <mozg> ))
[15:20] <janos> there is an example in the docs for that i thought
[15:20] <janos> that can be done
[15:20] <ccourtaut> http://ceph.com/docs/master/rados/operations/pools/#setpoolvalues
[15:21] <ccourtaut> this one links to the crush_ruleset you can map to a pool
[15:21] <ccourtaut> http://ceph.com/docs/master/rados/operations/crush-map/
[15:21] <jeff-YF> ahh.. yes.. i see theres a section there for placing different pools on different OSD's
[15:21] <jeff-YF> perfect
[15:25] <jeff-YF> a bit complicated though it seems..lol
[15:25] <ccourtaut> yes over here http://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds
[15:26] <jeff-YF> the SSD's don't have to be on a different ceph host than the regular drives do they?
[15:26] <ccourtaut> the main idea here is to have your various items declared
[15:26] <ccourtaut> regroup them under a root
[15:27] <ccourtaut> and take the corresponding root with crush
[15:28] <ccourtaut> hum don't know about that,
[15:28] <ccourtaut> maybe you can go down to osd level, but the example is indeed done with hosts
[15:31] * markbby (~Adium@168.94.245.3) has joined #ceph
[15:31] * LeaChim (~LeaChim@0540adc6.skybroadband.com) Quit (Ping timeout: 480 seconds)
[15:31] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[15:34] * nwat (~nwat@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[15:43] <jeff-YF> just found this which seems to show how to do it at the OSD level. http://www.sebastien-han.fr/blog/2012/12/07/ceph-2-speed-storage-with-crush/
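A compressed sketch of the workflow from those links: decompile the CRUSH map, add an ssd root plus a rule that takes it, recompile, and point a pool at the new rule (bucket names, pool name and rule id below are illustrative):

    $ ceph osd getcrushmap -o crush.bin
    $ crushtool -d crush.bin -o crush.txt           # edit crush.txt: add an "ssd" root holding the SSD osds and a rule that takes it
    $ crushtool -c crush.txt -o crush.new
    $ ceph osd setcrushmap -i crush.new
    $ ceph osd pool create fastpool 512 512
    $ ceph osd pool set fastpool crush_ruleset 3    # id of the ssd rule added in crush.txt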
[15:47] * yanzheng (~zhyan@134.134.139.70) Quit (Remote host closed the connection)
[15:48] * jluis (~JL@89.181.148.68) has joined #ceph
[15:52] * agh (~oftc-webi@gw-to-666.outscale.net) Quit (Quit: Page closed)
[16:04] * BillK (~BillK-OFT@124-169-67-32.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[16:06] <madkiss1> so what could cause an iSCSI target to provide only 10% of the performance of the underlying block device?
[16:06] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[16:06] <mattch> This might be a daft question, but when I do 'ceph-deploy osd create ...' ceph.conf doesn't seem to get updated. How does it 'know' to start this osd on next boot?
[16:19] * sprachgenerator (~sprachgen@c-50-141-192-36.hsd1.il.comcast.net) Quit (Quit: sprachgenerator)
[16:21] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) has joined #ceph
[16:29] <mattch> Also, how do I specify the 'cluster addr' value for an osd? do I have to manually create a ceph.conf section for each osd created by ceph-deploy?
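On the 'cluster addr' question: ceph-deploy doesn't write per-daemon sections, but they can be added to ceph.conf by hand; a hedged example with placeholder networks (setting 'cluster network' once in [global] is usually enough, the per-osd override is only needed for special cases):

    [global]
        public network = 192.168.1.0/24
        cluster network = 10.10.1.0/24

    [osd.0]
        cluster addr = 10.10.1.11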
[16:41] <lyncos> sagewk are you online ?
[16:42] <lyncos> Hi dmick ... on friday you did start helping me with an OSD using more than 100% CPU then crashing ... you still up to this ?
[16:50] * waxzce (~waxzce@2a01:e34:ee97:c5c0:9005:4a1b:7256:3dac) Quit (Ping timeout: 480 seconds)
[16:50] * yanzheng (~zhyan@jfdmzpr05-ext.jf.intel.com) Quit (Remote host closed the connection)
[17:04] * jluis (~JL@89.181.148.68) Quit (Ping timeout: 480 seconds)
[17:05] * jeff-YF (~jeffyf@216.14.83.26) has joined #ceph
[17:13] * jluis (~JL@89.181.148.68) has joined #ceph
[17:15] * sprachgenerator (~sprachgen@130.202.135.206) has joined #ceph
[17:19] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[17:24] * sagelap (~sage@2600:1012:b028:171f:5c19:953a:c0a9:a820) has joined #ceph
[17:26] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:26] <off_rhoden> mattch: the Upstart scripts try to start any Ceph OSD they find on the system automatically. It doesn't rely on them being present in the ceph.conf file anymore. That was only ever necessary when using the older mkcephfs-created clusters.
[17:27] <off_rhoden> I *think* the newer sysvinit scripts try to do it too, but am not positive on that one.
[17:27] * jeff-YF (~jeffyf@216.14.83.26) Quit (Quit: jeff-YF)
[17:28] * sagelap (~sage@2600:1012:b028:171f:5c19:953a:c0a9:a820) Quit (Read error: Connection reset by peer)
[17:30] * devoid (~devoid@130.202.135.213) has joined #ceph
[17:32] * AfC (~andrew@2001:44b8:31cb:d400:38ca:4d4c:1dfa:6176) Quit (Ping timeout: 480 seconds)
[17:33] * stepan_cz (~Adium@2a01:348:94:30:d185:e448:9da0:b0dc) has joined #ceph
[17:39] <stepan_cz> Hi, I'm trying to set up a test installation of Ceph (0.61.6, on CentOS 6.4, using official RPMs), the problem is I always end up with some PGs marked as "active+degraded" - usually about 1/10th, no matter what I do. I have two OSDs, I've tried things like deleting & recreating the pool, updating the CRUSH map… the problem seems to be that these PGs are always assigned to only one OSD, no idea why… any help would be more than appreciated :)
[17:41] * ScOut3R (~ScOut3R@catv-89-133-17-71.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:44] <joao> stepan_cz, http://ceph.com/docs/master/rados/operations/crush-map/#tunables
[17:46] * sagelap (~sage@2600:1012:b028:171f:b1dd:2879:bafb:4558) has joined #ceph
[17:47] <stepan_cz> cool, thanks a lot joao, will try to play with these, seems like my problem is the one described under "impact of legacy values"
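For reference, the tunables change that page describes can be applied by rebuilding the CRUSH map with crushtool; a sketch using the 'optimal' values from the documentation (older kernel clients may not understand them):

    $ ceph osd getcrushmap -o crush.bin
    $ crushtool -i crush.bin --set-choose-local-tries 0 --set-choose-local-fallback-tries 0 --set-choose-total-tries 50 -o crush.tuned
    $ ceph osd setcrushmap -i crush.tuned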
[17:47] * jeff-YF (~jeffyf@216.14.83.26) has joined #ceph
[17:48] <sagelap> glowell: there? can you look at my fix for http://tracker.ceph.com/issues/5779 ?
[17:48] <sagelap> glowell: also i fixed the broken changelog link, btw.. the file was just missing, forgot to push it
[17:51] * DarkAce-Z is now known as DarkAceZ
[17:52] * waxzce (~waxzce@2a01:e34:ee97:c5c0:a83c:82c8:4b65:c134) has joined #ceph
[17:56] * jluis (~JL@89.181.148.68) Quit (Ping timeout: 480 seconds)
[17:56] * gregaf (~Adium@38.122.20.226) Quit (Quit: Leaving.)
[18:00] * gregaf (~Adium@38.122.20.226) has joined #ceph
[18:00] * dmick (~dmick@2607:f298:a:607:99c2:3046:25c8:89c) Quit (Ping timeout: 480 seconds)
[18:01] * jeff-YF (~jeffyf@216.14.83.26) Quit (Quit: jeff-YF)
[18:06] * saml (~sam@adfb12c6.cst.lightpath.net) has joined #ceph
[18:07] * sagelap (~sage@2600:1012:b028:171f:b1dd:2879:bafb:4558) Quit (Ping timeout: 480 seconds)
[18:08] <saml> is this good?
[18:10] <joao> depends on how it's cooked
[18:11] <saml> if i mount ceph filesystem and have multiple processes write to the same path, is it okay?
[18:11] <saml> maybe i'll actually try and see
[18:13] * portante|afk is now known as portante
[18:14] * dmick (~dmick@38.122.20.226) has joined #ceph
[18:16] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[18:17] * jluis (~JL@89.181.148.68) has joined #ceph
[18:20] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) has joined #ceph
[18:21] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[18:26] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[18:27] * bergerx_ (~bekir@78.188.101.175) Quit (Quit: Leaving.)
[18:32] * cjh_ (~cjh@ps123903.dreamhost.com) has joined #ceph
[18:34] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[18:44] * markl (~mark@tpsit.com) has joined #ceph
[18:48] <glowell> sagelap: got it
[18:51] * mozg (~andrei@host217-46-236-49.in-addr.btopenworld.com) Quit (Ping timeout: 480 seconds)
[18:56] * markbby (~Adium@168.94.245.3) has joined #ceph
[19:03] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) Quit (Quit: Leaving.)
[19:06] * devoid (~devoid@130.202.135.213) Quit (Ping timeout: 480 seconds)
[19:06] * stepan_cz (~Adium@2a01:348:94:30:d185:e448:9da0:b0dc) Quit (Quit: Leaving.)
[19:21] * SvenPHX (~scarter@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[19:22] * SvenPHX (~scarter@wsip-174-79-34-244.ph.ph.cox.net) has left #ceph
[19:30] * sagelap (~sage@38.122.20.226) has joined #ceph
[19:33] * lyncos (~chatzilla@208.71.184.41) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 22.0/20130627161625])
[19:36] * dpippenger (~riven@tenant.pas.idealab.com) has joined #ceph
[19:39] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has joined #ceph
[19:45] * devoid (~devoid@130.202.135.213) has joined #ceph
[19:47] * mschiff (~mschiff@port-90731.pppoe.wtnet.de) Quit (Remote host closed the connection)
[20:01] <sjust> loicd: merged your sharedptr_registry unit tests
[20:02] * dontalton (~don@rtp-isp-nat1.cisco.com) has joined #ceph
[20:02] <sjust> loicd: 414 is probably ready to go, but needs to be tested first, so that should probably wait for the cluster to be more idle
[20:05] <dontalton> is there some way to get additional log data out of an osd? I enabled debug per the site docs, all my OSD daemons are running, but they never are marked up or available?
[20:08] * saabylaptop (~saabylapt@1009ds5-oebr.1.fullrate.dk) has joined #ceph
[20:11] * davidz (~Adium@ip68-5-239-214.oc.oc.cox.net) Quit (Quit: Leaving.)
[20:12] * jjgalvez (~jjgalvez@ip72-193-215-88.lv.lv.cox.net) has joined #ceph
[20:14] * dxd828 (~dxd828@host-2-97-70-33.as13285.net) has joined #ceph
[20:16] * davidz (~Adium@ip68-5-239-214.oc.oc.cox.net) has joined #ceph
[20:28] * dxd828 (~dxd828@host-2-97-70-33.as13285.net) Quit (Quit: Computer has gone to sleep.)
[20:32] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) Quit (Quit: Leaving.)
[20:32] * grepory (~Adium@50-115-70-146.static-ip.telepacific.net) has joined #ceph
[20:34] * dxd828 (~dxd828@host-2-97-70-33.as13285.net) has joined #ceph
[20:41] * sagelap (~sage@38.122.20.226) Quit (Ping timeout: 480 seconds)
[20:48] * MapspaM is now known as SpamapS
[20:48] * LeaChim (~LeaChim@0540ae5a.skybroadband.com) has joined #ceph
[20:54] * julian (~julianwa@125.70.133.36) Quit (Ping timeout: 480 seconds)
[20:57] * sagelap (~sage@38.122.20.226) has joined #ceph
[20:58] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[21:09] * waxzce (~waxzce@2a01:e34:ee97:c5c0:a83c:82c8:4b65:c134) Quit (Remote host closed the connection)
[21:14] * jeff-YF (~jeffyf@67.23.117.122) Quit (Quit: jeff-YF)
[21:15] <josef> sagewk: much more fighting with this build and you are going to have to put me on your payroll as a contractor
[21:15] <sagewk> works for me :) what is wrong with it?
[21:15] <josef> sagewk: its just little things
[21:16] <josef> first it was a bunch of files no longer existing/new ones
[21:16] <josef> then stuff trying to be installed to /usr/usr/sbin
[21:16] <josef> now it wont build on arm
[21:16] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has left #ceph
[21:16] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[21:16] <josef> i *think* i have it this time
[21:16] * buck (~buck@c-24-6-91-4.hsd1.ca.comcast.net) has joined #ceph
[21:16] <alphe> Babyshambles - Nothing comes to nothing
[21:17] * alphe (~alphe@0001ac6f.user.oftc.net) Quit ()
[21:17] <sagewk> blerg i thought the /usr/usr thing was fixed :/
[21:18] <josef> it is in the development branch
[21:18] <josef> not in cuttlefish
[21:18] * jeff-YF (~jeffyf@216.14.83.26) has joined #ceph
[21:19] <cmdrk> hey guys, curious about some advice. I've got about 25 machines, mostly scrappy compute nodes from a cluster environment. they can each hold 6-8 disks. so i'm planning on around 1 OSD for each, coming out at around 150-200 OSDs once the farm is built. each machine is 4 CPU / 8 GB RAM / 1GbE. recommendations on MDS/MON setup? two or three 10Gb hosts will be mounting CephFS via kernel module
[21:20] <cmdrk> 1 OSD for each disk*
[21:20] <guppy> for mons you want either 3 or 5
[21:20] <cmdrk> alright
[21:20] <guppy> mds, I don't think any more than 1 mds is currently recommended (tested) ... though more should work I believe
[21:20] <cmdrk> I think I read something about only using 1 mds in production for cuttlefish?
[21:21] <nhm> cmdrk: technically we don't recommend cephfs for production at all. ;)
[21:21] <nhm> dmick: but if you are going to do it, stick with 1 for now.
[21:21] <nhm> oops, that was for cmdrk
[21:21] <sagewk> loicd: there?
[21:23] <cmdrk> sounds good :) I'm just living dangerously a bit, as this will be a 'volatile' best-effort file store. and of course I'm interested in playing with Ceph ;)
[21:24] <cmdrk> safe to put mons on the OSD machines or should I have them on dedicated hosts?
[21:24] <sagewk> safe
[21:24] <nhm> cmdrk: let us know how it ends up going, it's good to hear bug reports for cephfs!
[21:24] <cmdrk> thanks sage
[21:25] * jeff-YF_ (~jeffyf@67.23.123.228) has joined #ceph
[21:25] <nhm> cmdrk: it's safe, but RAM could be an issue, especially if you have 8 OSDs per host.
[21:25] <devoid> nhm: is there a page that describes what services are recommended for production? And in what configurations?
[21:26] <cmdrk> I plan to do a lot of benchmarking -- physics shop around so we like our plots ;)
[21:26] <nhm> devoid: there is: http://ceph.com/docs/master/install/hardware-recommendations/
[21:28] <nhm> devoid: I tend to recommend more memory than our minimums, but during typical operation the minimums will probably work. Mostly.
[21:29] <devoid> nhm: if you look under the http://ceph.com/docs/master/cephfs/ section, there's no indication that you *shouldn't* run more than one MDS
[21:31] * jeff-YF (~jeffyf@216.14.83.26) Quit (Ping timeout: 480 seconds)
[21:31] * jeff-YF_ is now known as jeff-YF
[21:31] * dxd828 (~dxd828@host-2-97-70-33.as13285.net) Quit (Quit: Computer has gone to sleep.)
[21:31] * sagelap (~sage@38.122.20.226) Quit (Quit: Leaving.)
[21:32] <nhm> devoid: We should probably put a big disclaimer up saying that CephFS in general is not recommended for production, and especially not in multi-mds configurations.
[21:32] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[21:32] <nhm> devoid: mind filing a bug in the tracker?
[21:33] <devoid> nhm: sure thing. I was also looking at how to send patches for the docs.
[21:34] <nhm> devoid: I think it's all github. John Wilkins is the guy to talk to (our super amazing docs guy)
[21:34] <grepory> nhm: would you be willing to share your fio configurations you used in your performance tests?
[21:35] <devoid> nhm: thanks. I'd say the docs are extremely good in comparison to other open-source projects.
[21:36] <nhm> gregaf: sure! it's all scripted here: https://github.com/ceph/ceph-tools/blob/master/cbt/benchmark/rbdfio.py
[21:36] <nhm> gregaf: and in kvmrbdfio.py too.
[21:37] <nhm> grepory: oops, that was for you
[21:37] <grepory> nhm: Thanks! :D
[21:37] <nhm> grepory: you should be able to plug in the values from the articles
[21:38] <nhm> grepory: not the most amazing code in the world. :)
[21:38] <devoid> is openid on tracker.ceph.com working?
[21:38] <grepory> nhm: hahaha it's no big deal. it is just nice to have some direction in this particular area. i am pretty green.
[21:38] <nhm> devoid: thanks! I'll let John know his work is loved. :)
[21:39] <nhm> grepory: IOR can be a harsh mistress. It's easy to do something that gives you misleading results.
[21:39] <grepory> nhm: getting ready to run tests on two clusters (the two hardware configurations we're deciding between), so kind of curious to see how they perform differently.
[21:39] <nhm> grepory: sorry, fio. Too many benchmark tools!
[21:39] * Cube (~Cube@66-87-118-175.pools.spcsdns.net) has joined #ceph
[21:39] <grepory> seriously.
[21:40] <grepory> nhm: yes, i first experienced that when it said we had 250 MB/s read throughput, but 2 GB/s write throughput.
[21:40] <nhm> yay cache!
[21:40] <grepory> haha
[21:40] <grepory> yeah
[21:40] <grepory> i have had to… learn.
[21:41] <nhm> grepory: I've been doing this for years and I am still learning (sometimes the hard way!)
[21:41] <grepory> nhm: it's fun :)
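For anyone following the fio thread above, a small illustrative invocation of the kind those scripts drive, aimed at a file on a mounted rbd volume (every parameter here is an example, not a value from nhm's articles):

    $ fio --name=randwrite --filename=/mnt/rbd/fio.dat --size=4g --ioengine=libaio --direct=1 --rw=randwrite --bs=4k --iodepth=16 --numjobs=4 --runtime=60 --time_based --group_reporting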
[21:43] <jluis> sagewk, ping
[21:43] <sagewk> hey
[21:44] <jluis> wrt caps, 'allow service foo rw'; is 'foo' supposed to be a mon service (auth and the sorts), or wrt a cluster daemon?
[21:44] <jluis> if it's the first as I'm assuming, then something's fishy
[21:44] <sagewk> mon service (mds, osd, auth, etc.)
[21:44] <jluis> okay, that's probably my fault (and if so I may have the solution)
[21:45] <jluis> I'll test it out
[21:45] <sagewk> config-key
[21:45] <sagewk> k
[21:45] <jluis> sagewk, just to confirm, "mon 'allow service auth rw'" is a correctly formed caps, right?
[21:47] <sagewk> yeah
[21:47] <jluis> cool
[21:50] <jluis> alright, works like a charm
[21:50] * KindTwo (KindOne@50.96.224.211) has joined #ceph
[21:50] <jluis> if I could just find a way to finish this unit test it would be swell
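The cap jluis is exercising, written out as it might be assigned to a client entity (the entity name is a placeholder; 'service' refers to a mon service such as auth, mds or osd, as sagewk confirms above):

    $ ceph auth caps client.foo mon 'allow service auth rw'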
[21:51] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:51] * KindTwo is now known as KindOne
[21:59] * Cube1 (~Cube@66-87-118-175.pools.spcsdns.net) has joined #ceph
[22:02] * Cube (~Cube@66-87-118-175.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[22:06] * dxd828 (~dxd828@host-2-97-70-33.as13285.net) has joined #ceph
[22:13] * xmltok_ (~xmltok@pool101.bizrate.com) has joined #ceph
[22:13] * Cube1 (~Cube@66-87-118-175.pools.spcsdns.net) Quit (Read error: No route to host)
[22:13] <sagewk> jluis: there?
[22:22] <jluis> yeah
[22:22] <jluis> sagewk, ^
[22:23] <sagewk> nm :)
[22:24] <jluis> sagewk, default behavior of 'ceph auth add client.foo' should be: 1) squash whatever we have on the keyring with new key; or 2) EEXIST ?
[22:24] <jluis> just patched for 2), currently working as 1)
[22:24] <sagewk> 1
[22:24] <jluis> why?
[22:24] <sagewk> or, return 0 if it matches
[22:25] <sagewk> repeating the same command twice needs to return success
[22:25] * dxd828 (~dxd828@host-2-97-70-33.as13285.net) Quit (Quit: Computer has gone to sleep.)
[22:25] * xmltok_ (~xmltok@pool101.bizrate.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * devoid (~devoid@130.202.135.213) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * lautriv (~lautriv@f050082152.adsl.alicedsl.de) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * infernix (nix@5ED33947.cm-7-4a.dynamic.ziggo.nl) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * X3NQ (~X3NQ@195.191.107.205) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * AaronSchulz (~chatzilla@216.38.130.164) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * guppy (~quassel@guppy.xxx) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * jeroenmoors (~quassel@193.104.8.40) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * jnq (~jon@0001b7cc.user.oftc.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * portante (~portante@nat-pool-bos-t.redhat.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * lmb (lmb@212.8.204.10) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * nigwil (~idontknow@174.143.209.84) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * Jakdaw (~chris@puma-mxisp.mxtelecom.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * jf-jenni (~jf-jenni@stallman.cse.ohio-state.edu) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * Kdecherf (~kdecherf@shaolan.kdecherf.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * off_rhoden (~anonymous@pool-173-79-66-35.washdc.fios.verizon.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * maswan (maswan@kennedy.acc.umu.se) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * sbadia (~sbadia@yasaw.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * soren (~soren@hydrogen.linux2go.dk) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * ivan` (~ivan`@000130ca.user.oftc.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * Azrael (~azrael@terra.negativeblue.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * chutz (~chutz@rygel.linuxfreak.ca) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * yeled (~yeled@spodder.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * davidz (~Adium@ip68-5-239-214.oc.oc.cox.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * dmick (~dmick@38.122.20.226) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * gregaf (~Adium@38.122.20.226) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * Psi-Jack_ (~Psi-Jack@yggdrasil.hostdruids.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * aliguori (~anthony@cpe-70-112-157-87.austin.res.rr.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * silversurfer (~jeandanie@124x35x46x12.ap124.ftth.ucom.ne.jp) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * ShaunR (~ShaunR@staff.ndchost.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * Tamil (~tamil@38.122.20.226) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * jochen (~jochen@laevar.de) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * terje_ (~joey@97-118-115-214.hlrn.qwest.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * NaioN_ (stefan@andor.naion.nl) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * tdb (~tdb@willow.kent.ac.uk) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * Fetch_ (fetch@gimel.cepheid.org) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * axisys (~axisys@ip68-98-189-233.dc.dc.cox.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * \ask (~ask@oz.develooper.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * jks (~jks@3e6b5724.rev.stofanet.dk) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * jeff-YF (~jeffyf@67.23.123.228) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * LeaChim (~LeaChim@0540ae5a.skybroadband.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * saabylaptop (~saabylapt@1009ds5-oebr.1.fullrate.dk) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * dontalton (~don@rtp-isp-nat1.cisco.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * cjh_ (~cjh@ps123903.dreamhost.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * lx0 (~aoliva@lxo.user.oftc.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * sprachgenerator (~sprachgen@130.202.135.206) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * PerlStalker (~PerlStalk@72.166.192.70) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * saml (~sam@adfb12c6.cst.lightpath.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * diegows (~diegows@190.190.2.126) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * Ormod (~valtha@ohmu.fi) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * Meyer^ (meyer@c64.org) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * nwf (~nwf@67.62.51.95) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * codice (~toodles@75-140-71-24.dhcp.lnbh.ca.charter.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * nwl (~levine@atticus.yoyo.org) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * rennu_ (sakari@turn.ip.fi) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * baffle_ (baffle@jump.stenstad.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * alexbligh (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * _robbat2|irssi (nobody@www2.orbis-terrarum.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * tchmnkyz (~jeremy@0001638b.user.oftc.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * joshd (~joshd@38.122.20.226) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * sjust (~sam@38.122.20.226) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * yehudasa__ (~yehudasa@2602:306:330b:1410:84d3:fab1:232b:b7b5) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * josef (~seven@li70-116.members.linode.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * liiwi (liiwi@idle.fi) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * [cave] (~quassel@boxacle.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * Daviey (~DavieyOFT@bootie.daviey.com) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * [fred] (fred@konfuzi.us) Quit (resistance.oftc.net synthon.oftc.net)
[22:25] * Sargun_ (~sargun@208-106-98-2.static.sonic.net) Quit (resistance.oftc.net synthon.oftc.net)
[22:26] <jluis> sagewk, okay, fair enough, but right now it is creating a new key and destroying all the caps
[22:26] <jluis> I refuse to believe that's by design :p
[22:26] * aliguori_ (~anthony@cpe-70-112-157-87.austin.res.rr.com) has joined #ceph
[22:26] * xmltok_ (~xmltok@pool101.bizrate.com) has joined #ceph
[22:26] * KindOne (KindOne@0001a7db.user.oftc.net) has joined #ceph
[22:26] * jeff-YF (~jeffyf@67.23.123.228) has joined #ceph
[22:26] * LeaChim (~LeaChim@0540ae5a.skybroadband.com) has joined #ceph
[22:26] * davidz (~Adium@ip68-5-239-214.oc.oc.cox.net) has joined #ceph
[22:26] * saabylaptop (~saabylapt@1009ds5-oebr.1.fullrate.dk) has joined #ceph
[22:26] * dontalton (~don@rtp-isp-nat1.cisco.com) has joined #ceph
[22:26] * devoid (~devoid@130.202.135.213) has joined #ceph
[22:26] * cjh_ (~cjh@ps123903.dreamhost.com) has joined #ceph
[22:26] * dmick (~dmick@38.122.20.226) has joined #ceph
[22:26] * saml (~sam@adfb12c6.cst.lightpath.net) has joined #ceph
[22:26] * gregaf (~Adium@38.122.20.226) has joined #ceph
[22:26] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:26] * sprachgenerator (~sprachgen@130.202.135.206) has joined #ceph
[22:26] * PerlStalker (~PerlStalk@72.166.192.70) has joined #ceph
[22:26] * Psi-Jack_ (~Psi-Jack@yggdrasil.hostdruids.com) has joined #ceph
[22:26] * diegows (~diegows@190.190.2.126) has joined #ceph
[22:26] * alfredodeza (~alfredode@c-24-131-46-23.hsd1.ga.comcast.net) has joined #ceph
[22:26] * nhorman (~nhorman@hmsreliant.think-freely.org) has joined #ceph
[22:26] * silversurfer (~jeandanie@124x35x46x12.ap124.ftth.ucom.ne.jp) has joined #ceph
[22:26] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[22:26] * lautriv (~lautriv@f050082152.adsl.alicedsl.de) has joined #ceph
[22:26] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[22:26] * Tamil (~tamil@38.122.20.226) has joined #ceph
[22:26] * infernix (nix@5ED33947.cm-7-4a.dynamic.ziggo.nl) has joined #ceph
[22:26] * mjeanson (~mjeanson@00012705.user.oftc.net) has joined #ceph
[22:26] * off_rhoden (~anonymous@pool-173-79-66-35.washdc.fios.verizon.net) has joined #ceph
[22:26] * yehudasa__ (~yehudasa@2602:306:330b:1410:84d3:fab1:232b:b7b5) has joined #ceph
[22:26] * axisys (~axisys@ip68-98-189-233.dc.dc.cox.net) has joined #ceph
[22:26] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[22:26] * josef (~seven@li70-116.members.linode.com) has joined #ceph
[22:26] * yeled (~yeled@spodder.com) has joined #ceph
[22:26] * sjust (~sam@38.122.20.226) has joined #ceph
[22:26] * joshd (~joshd@38.122.20.226) has joined #ceph
[22:26] * tchmnkyz (~jeremy@0001638b.user.oftc.net) has joined #ceph
[22:26] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[22:26] * Kdecherf (~kdecherf@shaolan.kdecherf.com) has joined #ceph
[22:26] * beardo (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[22:26] * chutz (~chutz@rygel.linuxfreak.ca) has joined #ceph
[22:26] * ivan` (~ivan`@000130ca.user.oftc.net) has joined #ceph
[22:26] * jf-jenni (~jf-jenni@stallman.cse.ohio-state.edu) has joined #ceph
[22:26] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[22:26] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[22:26] * Jakdaw (~chris@puma-mxisp.mxtelecom.com) has joined #ceph
[22:26] * nigwil (~idontknow@174.143.209.84) has joined #ceph
[22:26] * cclien_ (~cclien@ec2-50-112-123-234.us-west-2.compute.amazonaws.com) has joined #ceph
[22:26] * lmb (lmb@212.8.204.10) has joined #ceph
[22:26] * portante (~portante@nat-pool-bos-t.redhat.com) has joined #ceph
[22:26] * sbadia (~sbadia@yasaw.net) has joined #ceph
[22:26] * maswan (maswan@kennedy.acc.umu.se) has joined #ceph
[22:26] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[22:26] * jnq (~jon@0001b7cc.user.oftc.net) has joined #ceph
[22:26] * jeroenmoors (~quassel@193.104.8.40) has joined #ceph
[22:26] * guppy (~quassel@guppy.xxx) has joined #ceph
[22:26] * AaronSchulz (~chatzilla@216.38.130.164) has joined #ceph
[22:26] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[22:26] * X3NQ (~X3NQ@195.191.107.205) has joined #ceph
[22:26] * soren (~soren@hydrogen.linux2go.dk) has joined #ceph
[22:26] * Fetch_ (fetch@gimel.cepheid.org) has joined #ceph
[22:26] * Psi-jack (~psi-jack@psi-jack.user.oftc.net) has joined #ceph
[22:26] * lightspeed (~lightspee@fw-carp-wan.ext.lspeed.org) has joined #ceph
[22:26] * _robbat2|irssi (nobody@www2.orbis-terrarum.net) has joined #ceph
[22:26] * alexbligh (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[22:26] * tdb (~tdb@willow.kent.ac.uk) has joined #ceph
[22:26] * NaioN_ (stefan@andor.naion.nl) has joined #ceph
[22:26] * terje_ (~joey@97-118-115-214.hlrn.qwest.net) has joined #ceph
[22:26] * jochen (~jochen@laevar.de) has joined #ceph
[22:26] * \ask (~ask@oz.develooper.com) has joined #ceph
[22:26] * [fred] (fred@konfuzi.us) has joined #ceph
[22:26] * nolan (~nolan@2001:470:1:41:20c:29ff:fe9a:60be) has joined #ceph
[22:26] * baffle_ (baffle@jump.stenstad.net) has joined #ceph
[22:26] * Daviey (~DavieyOFT@bootie.daviey.com) has joined #ceph
[22:26] * rennu_ (sakari@turn.ip.fi) has joined #ceph
[22:26] * liiwi (liiwi@idle.fi) has joined #ceph
[22:26] * Sargun_ (~sargun@208-106-98-2.static.sonic.net) has joined #ceph
[22:26] * nwl (~levine@atticus.yoyo.org) has joined #ceph
[22:26] * codice (~toodles@75-140-71-24.dhcp.lnbh.ca.charter.com) has joined #ceph
[22:26] * nwf (~nwf@67.62.51.95) has joined #ceph
[22:26] * Meyer^ (meyer@c64.org) has joined #ceph
[22:26] * Ormod (~valtha@ohmu.fi) has joined #ceph
[22:26] * [cave] (~quassel@boxacle.net) has joined #ceph
[22:26] <jluis> gotta love netsplits
[22:27] * ChanServ sets mode +v scuttlemonkey
[22:27] <sagewk> yeah that's just sloppy.
[22:28] * dxd828 (~dxd828@host-2-97-70-33.as13285.net) has joined #ceph
[22:28] <sagewk> hopefully nobody relies on that behavior.
[22:28] <sagewk> note it in PendingReleaseNotes
[22:28] * paravoid quotes http://xkcd.com/1172/
[22:29] <jluis> lol
[22:30] <jluis> sagewk, kay, I'll push a patch for a sane approach to 1)
[22:34] <jluis> funny enough, 'ceph auth add client.foo -i keyring' acts as 'auth import', and I'm having a hard time arguing against that because we don't have any other command to import just one entity (for update purposes or whatever)
[22:35] * mozg (~andrei@host109-151-35-94.range109-151.btcentralplus.com) has joined #ceph
[22:38] * jeff-YF (~jeffyf@67.23.123.228) Quit (Quit: jeff-YF)
[22:38] * nhorman (~nhorman@hmsreliant.think-freely.org) Quit (Quit: Leaving)
[22:38] <jluis> uh, silly me, looks like import does just that
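For the record, a minimal illustrative use of the import path jluis ends up pointing at (keyring path made up):

    $ ceph auth import -i /etc/ceph/client.foo.keyring   # creates/updates the entities found in the keyring
    $ ceph auth list                                      # check which caps were applied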
[22:39] * DarkAceZ (~BillyMays@50.107.55.36) Quit (Ping timeout: 480 seconds)
[22:39] * jeff-YF (~jeffyf@216.14.83.26) has joined #ceph
[22:41] * jeff-YF_ (~jeffyf@67.23.123.228) has joined #ceph
[22:43] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:7026:79b0:d727:42d4) Quit (Read error: Connection timed out)
[22:43] * madkiss (~madkiss@2001:6f8:12c3:f00f:3c07:a4d9:23e5:6db3) has joined #ceph
[22:44] * DarkAceZ (~BillyMays@50.107.55.36) has joined #ceph
[22:47] * jeff-YF (~jeffyf@216.14.83.26) Quit (Ping timeout: 480 seconds)
[22:47] * jeff-YF_ is now known as jeff-YF
[22:48] <josef> sagewk: ok 61.7 is building on f19/18/el6
[22:49] <sagewk> awesome! what did you have to fix to make it build?
[22:51] <josef> had to disable tcmalloc
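For reference, a sketch of how tcmalloc gets disabled in a from-source build, assuming the autotools build of that era:

    $ ./autogen.sh
    $ ./configure --without-tcmalloc
    $ make -j4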
[22:55] <lautriv> how/where may i increase the debug output to see more verbosely what disk prepare does, resp. is trying to do and fails ?
[22:56] * _Tassadar (~tassadar@tassadar.xs4all.nl) Quit (Ping timeout: 480 seconds)
[22:57] <sagewk> lautriv: ceph-disk -v prepare datadev journaldev
[22:59] <lautriv> sagewk, point is, if i do that it outputs as if it succeeded but leaves me with "unrecognised disklabel"; shortly before the command comes back, the output of parted /device print free is correct.
[22:59] <sagewk> strange... :/
[22:59] <sagewk> oh, you mentioned this yesterday in the channel?
[23:00] <sagewk> something about -s 2048?
[23:00] <lautriv> sagewk, also it works fine on any but one box where the difference is just smaller disks, which leads me to the assumption there is some miscalculation.
[23:01] <sagewk> how much smaller
[23:01] <sagewk> ?
[23:02] <lautriv> fail -> 73.4G success ->146G
[23:02] <lautriv> huangjun suggested i change the logfile in some python script, but the only thing it logs is a warning about not being hot-swappable if an external journal is used
[23:03] <sagewk> can you extract the sgdisk commands it is doing to partition and run them manually and see if you get the same broken result?
[23:03] <sagewk> then we can narrow down exactly what we should be telling sgdisk to do?
[23:04] <lautriv> sage i had that idea but dunno how
[23:04] <lautriv> i would get exactly that output.
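One hedged way to do what sagewk suggests: capture ceph-disk's verbose output and replay the partitioning by hand (device names are examples; whether the log shows the full sgdisk command line depends on the ceph-disk version):

    $ ceph-disk -v prepare /dev/sdc /dev/sdb1 2>&1 | tee /tmp/prepare.log
    $ grep -i sgdisk /tmp/prepare.log     # look for the sgdisk invocation(s) ceph-disk ran
    $ sgdisk --print /dev/sdc             # inspect the resulting table after re-running them manually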
[23:05] * dontalton (~don@rtp-isp-nat1.cisco.com) Quit (Quit: Leaving)
[23:06] <lautriv> sage, it is mentioned that sgdisk does only the primary label and the script will just overwrite the last blocks to remove non-matching backups, i assume there are some wrong sector counts.
[23:07] <sagewk> the overwriting is just for --zap-disk when clearing the old table
[23:07] <sagewk> ceph-disk zap DEV
[23:07] <sagewk> (the sgdisk zap option does not work as advertised)
[23:08] <lautriv> that works, also disk prepare works up to the last second, so i would increase even that output to eventually see what sector numbers/sizes are requested.
[23:10] <lautriv> example from zap :
[23:11] <lautriv> Disk /dev/sdc: 73.4GB
[23:11] <lautriv> Sector size (logical/physical): 512B/512B
[23:11] <lautriv> Partition Table: gpt
[23:11] <lautriv> Number Start End Size File system Name Flags
[23:11] <lautriv> 17.4kB 73.4GB 73.4GB Free Space
[23:11] <lautriv> example from prepare :
[23:11] <lautriv> Model: SEAGATE SX373307LC (scsi)
[23:11] <lautriv> Disk /dev/sdc: 73.4GB
[23:11] <lautriv> Sector size (logical/physical): 512B/512B
[23:11] <lautriv> Partition Table: gpt
[23:11] <lautriv> Number Start End Size File system Name Flags
[23:11] <lautriv> 17.4kB 1049kB 1031kB Free Space
[23:11] <lautriv> 1 1049kB 73.4GB 73.4GB ceph data
[23:11] <lautriv> like it should ....
[23:12] <lautriv> endresult : unrecognized disklabel :(
[23:17] <lautriv> if i repeat the very same prepare, it bails out with : Could not create partition 1 from 34 to 16777215 , where the last physical sector is 143374704
[23:18] <lautriv> this is close to twice the size (+80% or so)
[23:19] <lautriv> err paste swallowed something ....
[23:24] <lautriv> the requested last sector is actually exactly 2^24-1 ( bits 0 to 23 set )
[23:27] * fireD (~fireD@93-142-241-203.adsl.net.t-com.hr) Quit (Quit: Lost terminal)
[23:28] <lautriv> Warning! Secondary partition table overlaps the last partition by
[23:28] <lautriv> 18446744073566176911 blocks!
[23:28] <lautriv> .... if i calculated right, that's 8.5EiB above
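A quick sanity check suggests unsigned wraparound rather than a real multi-exabyte overlap: 2^64 minus the reported overlap is exactly the disk's sector count (last sector 143374704 quoted above, so 143374705 sectors in total), i.e. the overlap looks like a small negative number stored in an unsigned 64-bit field:

    $ python -c 'print(2**64 - 18446744073566176911)'
    143374705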
[23:28] * _Tassadar (~tassadar@tassadar.xs4all.nl) has joined #ceph
[23:30] <sagewk> is this a 32-bit machine?
[23:30] <sagewk> what is the seuqence of sgdisk commands?
[23:30] <lautriv> you ask me what.......must actually ssh there to see ;)
[23:31] <lautriv> yep, Xeon 32 bit
[23:31] <sagewk> what's the distro?
[23:32] <lautriv> debian, ceph is 61.7
[23:32] <sagewk> sounds like a buggy sgdisk ....?
[23:32] <sagewk> which debian?
[23:32] <lautriv> i assume that sgdisk is the main culprit, but then why would it work on another box with an identical setup ?
[23:33] <lautriv> sid on 3.10.1
[23:33] <sagewk> larger disk, or different sgdisk version, or different arch
[23:33] <sagewk> ?
[23:33] <lautriv> that is, the other box has the 146G
[23:33] * jeff-YF (~jeffyf@67.23.123.228) Quit (Quit: jeff-YF)
[23:34] <lautriv> actually sync to debian-mirror is running, will see what i get after that
[23:35] <lautriv> GPT fdisk (sgdisk) version 0.8.5
[23:49] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:49] * rturk-away is now known as rturk
[23:53] * jeff-YF (~jeffyf@67.23.117.122) has joined #ceph
[23:53] * yanzheng (~zhyan@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[23:57] * jeff-YF (~jeffyf@67.23.117.122) Quit ()

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.