#ceph IRC Log

IRC Log for 2014-06-10

Timestamps are in GMT/BST.

[0:02] * Pedras (~Adium@216.207.42.132) has joined #ceph
[0:03] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[0:04] * nwat (~textual@50.141.87.7) has joined #ceph
[0:09] * nwat (~textual@50.141.87.7) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:10] * fdmanana (~fdmanana@bl10-253-137.dsl.telepac.pt) Quit (Quit: Leaving)
[0:19] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:19] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[0:29] * bandrus1 (~oddo@adsl-71-137-197-172.dsl.scrm01.pacbell.net) Quit (Read error: Connection reset by peer)
[0:30] * bandrus (~oddo@adsl-71-137-197-172.dsl.scrm01.pacbell.net) has joined #ceph
[0:36] * lupu1 (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[0:43] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:43] * Cube (~Cube@12.248.40.138) Quit (Ping timeout: 480 seconds)
[0:43] * aldavud (~aldavud@212.243.10.250) Quit (Read error: Operation timed out)
[0:45] * dmsimard is now known as dmsimard_away
[0:48] * jpuellma (uid32064@id-32064.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[0:48] * aldavud (~aldavud@212.243.10.250) has joined #ceph
[0:49] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[0:53] * yuriw (~Adium@AMarseille-151-1-82-7.w92-150.abo.wanadoo.fr) has joined #ceph
[0:53] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[1:01] * yuriw (~Adium@AMarseille-151-1-82-7.w92-150.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[1:13] * Cube (~Cube@66.87.64.238) has joined #ceph
[1:13] * aldavud (~aldavud@212.243.10.250) Quit (Ping timeout: 480 seconds)
[1:13] * aldavud_ (~aldavud@212.243.10.250) Quit (Ping timeout: 480 seconds)
[1:13] * hitsumabushi is now known as zz_hitsumabushi
[1:17] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) has joined #ceph
[1:18] * oms101 (~oms101@p20030057EA66FF00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:27] * oms101 (~oms101@p20030057EA1CB000EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:29] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[1:30] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit ()
[1:30] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[1:34] * rpowell (~rpowell@128.135.100.108) has joined #ceph
[1:36] * aldavud (~aldavud@212.243.10.250) has joined #ceph
[1:41] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) Quit (Quit: dereky)
[1:42] * joef1 (~Adium@2601:9:2a00:690:6898:a85a:ab89:8120) has joined #ceph
[1:43] * joef (~Adium@2620:79:0:131:7c95:3990:3406:fe04) Quit (Remote host closed the connection)
[1:43] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[1:44] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[1:44] * rpowell (~rpowell@128.135.100.108) Quit (Quit: Leaving.)
[1:45] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) has joined #ceph
[1:45] * aldavud (~aldavud@212.243.10.250) Quit (Ping timeout: 480 seconds)
[1:46] * yanfali_lap (~yanfali@75-101-14-52.static.sonic.net) has joined #ceph
[1:54] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[1:55] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[1:55] * yanfali_lap (~yanfali@75-101-14-52.static.sonic.net) Quit (Quit: yanfali_lap)
[2:08] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:09] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[2:11] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) Quit (Quit: dereky)
[2:14] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[2:14] <jsfrerot> .
[2:15] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:17] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) has joined #ceph
[2:20] * joef1 (~Adium@2601:9:2a00:690:6898:a85a:ab89:8120) Quit (Quit: Leaving.)
[2:22] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:27] * huangjun (~kvirc@111.173.98.164) has joined #ceph
[2:28] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:33] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:35] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:37] * aldavud (~aldavud@212.243.10.250) has joined #ceph
[2:42] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:44] * joef (~Adium@c-67-188-220-98.hsd1.ca.comcast.net) has joined #ceph
[2:45] * aldavud (~aldavud@212.243.10.250) Quit (Ping timeout: 480 seconds)
[2:51] * dmsimard_away is now known as dmsimard
[2:52] * LeaChim (~LeaChim@host86-174-77-240.range86-174.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:55] * yuriw (~Adium@AMarseille-151-1-82-7.w92-150.abo.wanadoo.fr) has joined #ceph
[2:56] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:56] * danieagle (~Daniel@186.214.77.228) has joined #ceph
[2:58] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[3:03] * yuriw (~Adium@AMarseille-151-1-82-7.w92-150.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[3:09] * narb (~Jeff@38.99.52.10) Quit (Quit: narb)
[3:10] * haomaiwang (~haomaiwan@112.193.130.70) has joined #ceph
[3:11] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (Remote host closed the connection)
[3:11] * Pedras (~Adium@216.207.42.132) Quit (Read error: Operation timed out)
[3:12] * haomaiwa_ (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) has joined #ceph
[3:13] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[3:18] * dmsimard is now known as dmsimard_away
[3:18] * haomaiwang (~haomaiwan@112.193.130.70) Quit (Ping timeout: 480 seconds)
[3:28] * yanfali_lap (~yanfali@75-101-14-52.static.sonic.net) has joined #ceph
[3:31] * diegows (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[3:37] * aldavud (~aldavud@212.243.10.250) has joined #ceph
[3:37] * aldavud_ (~aldavud@212.243.10.250) has joined #ceph
[3:41] * mlausch (~mlausch@2001:8d8:1fe:7:4c49:53a:2bb9:6cb0) Quit (Ping timeout: 480 seconds)
[3:45] * aldavud_ (~aldavud@212.243.10.250) Quit (Read error: Operation timed out)
[3:45] * aldavud (~aldavud@212.243.10.250) Quit (Ping timeout: 480 seconds)
[3:46] * yanzheng (~zhyan@134.134.137.75) has joined #ceph
[3:49] * mlausch (~mlausch@2001:8d8:1fe:7:60fa:ea5f:f40f:8817) has joined #ceph
[3:52] * vbellur (~vijay@122.166.181.47) has joined #ceph
[3:54] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[3:55] * joef (~Adium@c-67-188-220-98.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[4:06] <MACscr> what do you guys think of a simple 3-node ceph cluster with 2x replica, 3 x 512GB Crucial 550 SSDs (they have capacitors) and 3 x 2TB SATA hard drives? Would use cache tiering with firefly
[4:08] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:10] <aarontc> I'm struggling to find a concise answer to "How to locate the correct copy of objects from an inconsistent pg and repair the primary OSD's copy". Does anyone have any pointers?
[4:11] * vbellur1 (~vijay@122.167.108.13) has joined #ceph
[4:11] <aarontc> I don't know how to get the checksum out of Ceph or how to tell which objects in a pg are inconsistent
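Nobody in the log answers this directly; as a rough sketch of the usual workflow (the targets are placeholders, and note that "ceph pg repair" trusts the primary's copy, which is exactly the concern raised above):

    ceph health detail                       # lists the PGs flagged inconsistent after scrubbing
    grep ERR /var/log/ceph/ceph-osd.*.log    # the deep-scrub errors on the primary name the object and the shard that mismatched
    ceph pg repair <pgid>                    # only after confirming the primary's copy is the good one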
[4:12] * lupu (~lupu@86.107.101.214) has joined #ceph
[4:13] * vbellur (~vijay@122.166.181.47) Quit (Ping timeout: 480 seconds)
[4:16] * zhaochao (~zhaochao@124.205.245.26) has joined #ceph
[4:22] * yanfali_lap (~yanfali@75-101-14-52.static.sonic.net) Quit (Quit: yanfali_lap)
[4:25] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[4:37] * aldavud (~aldavud@212.243.10.250) has joined #ceph
[4:39] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[4:46] * aldavud (~aldavud@212.243.10.250) Quit (Ping timeout: 480 seconds)
[4:48] * skullone (~skullone@shell.skull-tech.com) has joined #ceph
[4:50] <skullone> does ceph support any type of storage compliance, such as write-once, or disabling delete/rm verbs?
[4:52] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[4:57] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[4:58] * yuriw (~Adium@AMarseille-151-1-82-7.w92-150.abo.wanadoo.fr) has joined #ceph
[5:04] * JCL (~JCL@c-24-23-166-139.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[5:04] * JCL (~JCL@2601:9:5980:39b:11be:465b:18d:1c11) has joined #ceph
[5:06] * yuriw (~Adium@AMarseille-151-1-82-7.w92-150.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[5:07] * bandrus (~oddo@adsl-71-137-197-172.dsl.scrm01.pacbell.net) Quit (Read error: Connection reset by peer)
[5:08] * bandrus (~oddo@adsl-71-137-197-172.dsl.scrm01.pacbell.net) has joined #ceph
[5:15] * nwat (~textual@50.141.87.8) has joined #ceph
[5:16] * nwat_ (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[5:17] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[5:18] * Cube (~Cube@66.87.64.238) Quit (Ping timeout: 480 seconds)
[5:22] * nwat (~textual@50.141.87.8) Quit (Read error: Connection reset by peer)
[5:23] * nwat_ (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[5:26] * Cube (~Cube@66.87.64.238) has joined #ceph
[5:27] * haomaiwa_ (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[5:27] * haomaiwang (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) has joined #ceph
[5:29] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[5:31] * Vacum (~vovo@i59F79F0A.versanet.de) has joined #ceph
[5:32] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[5:34] * brytown (~brytown@142-254-47-204.dsl.dynamic.sonic.net) has joined #ceph
[5:34] * brytown (~brytown@142-254-47-204.dsl.dynamic.sonic.net) Quit ()
[5:37] * aldavud (~aldavud@212.243.10.250) has joined #ceph
[5:38] * Vacum_ (~vovo@88.130.205.95) Quit (Ping timeout: 480 seconds)
[5:44] * aldavud (~aldavud@212.243.10.250) Quit (Read error: Operation timed out)
[5:46] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[5:55] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[6:00] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[6:00] * blynch (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[6:00] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[6:11] <aarontc> MACscr: 2x replica is pretty dangerous in terms of data loss potential
[6:12] <MACscr> how so? thats a full copy of everything
[6:12] <MACscr> plus of course normal backup procedures
[6:12] <MACscr> (i do backups every 4 hours for our critical systems)
[6:13] <aarontc> well there have been many threads on the mailing list about probability of data loss... with 2 replicas it's quite high
[6:13] <MACscr> i mean, doing 2x replica isnt that much different than doing a raid1, right?
[6:13] <MACscr> high in general or obviously much higher than 3x
[6:13] <aarontc> right
[6:14] <aarontc> obviously much higher than 3x
[6:14] <aarontc> ""
[6:14] <MACscr> right, but i wouldnt call that high in general
[6:14] <MACscr> its just obviously much safer to have 3 copies versus 2
[6:14] <aarontc> the problem is during recovery - when you lose a disk the increased workload tends to cause others to fail soon after
[6:15] * joef (~Adium@2601:9:2a00:690:fddc:65cd:75ea:1801) has joined #ceph
[6:15] * joef (~Adium@2601:9:2a00:690:fddc:65cd:75ea:1801) Quit ()
[6:15] <MACscr> thats a scary thought. Not sure why it would make them fail. should just slow things down
[6:15] <aarontc> (I was in your camp until I lost two drives within 10 minutes of each other... using 2x replica lost me 4TB of data)
[6:15] <MACscr> if ceph is that flakey, it doesnt sound like too great of a solution
[6:15] <aarontc> when I came and asked for help in the channel everyone pretty much said you should never run less than 3
[6:16] <aarontc> well, as I understand the threads on the mailing list, it has nothing to do with ceph
[6:16] <aarontc> just the nature of spinning rust storage
[6:16] <MACscr> of course it does as this isnt a huge concern with raid1 or raid 10
[6:16] <MACscr> with software or hardware raid
[6:16] <MACscr> so has to be something specific to ceph i would think
[6:17] * aarontc shrugs
[6:17] <aarontc> I'm no expert, just telling you what I've heard
[6:17] <MACscr> i appreciate it, sorry if im coming off that way
[6:17] <MACscr> just seems a bit of a waste to lose 2 of 3 disks
[6:18] <MACscr> thats a huge penalty
[6:18] <aarontc> if I had the link I'd send it to you, someone made a nice spreadsheet that let you plug in the number of nodes, number of OSDs, replication level, and how long it takes you to replace a failed disk, and told you your probability of losing data and it compared against RAID
[6:19] <aarontc> it was pretty nifty
[6:20] <MACscr> versus raid? so obviously it is specific to ceph
[6:21] <aarontc> yeah, as I recall ceph was substantially more reliable than RAID5, and after a half dozen or so OSDs better than RAID6
[6:21] * zack_dolby (~textual@e0109-114-22-12-137.uqwimax.jp) has joined #ceph
[6:26] <MACscr> grr, so if i bought 3 x 1TB SSD's. that only gives me 1TB of usable space. hmm
[6:27] <aarontc> well, if you're looking at such a small cluster, I'm not sure if ceph is really what you need/want... you also have to keep plenty of space free for overhead and things like rebalancing when hardware fails
[6:27] <aarontc> (as in, aim for less than 85% utilization, or so)
[6:27] <MACscr> im trying to save some space and power in my rack. I have 4 storage servers. 2 servers with 12 x 300GB 15k SAS and 2 servers with 6 x 300GB 15k SAS
[6:28] * sleinen1 (~Adium@2001:620:0:26:1120:c6c8:6826:b854) Quit (Quit: Leaving.)
[6:29] <MACscr> i dont need a lot of space (i could get by with 1tb to start to be honest), but i need the shared storage and redundancy of ceph rbd
[6:29] <beardo_> MACscr, while a little old, you might find this interesting with regard to disk failures during rebuilds, both with RAID and ceph: http://www.zdnet.com/blog/storage/why-raid-5-stops-working-in-2009/162
[6:30] <MACscr> beardo: well raid 5 does suck. its not something i use
[6:30] <MACscr> i only use raid1 and raid10
[6:30] <MACscr> a replica of 2 is pretty much raid 10
[6:31] <beardo_> not exactly. The data isn't necessarily stored only on two OSDs
[6:31] <beardo_> data is striped by librbd across multiple objects
[6:31] <beardo_> which are hosted on multiple OSDs
[6:32] <beardo_> but that doesn't mean a 1MB file is stored exactly on two drives, as it would be in RAID 1
[6:34] <beardo_> assuming it's written in the ceph standard 4KB chunks, it could be written on as many as 512 OSDs with two replicas
[6:35] <MACscr> beardo: hmm, ok. poop
[6:35] <beardo_> at least based on my reading of http://ceph.com/docs/master/architecture/
[6:35] <aarontc> (I believe the default block size is 4MB, not 4kB, beardo_)
[6:36] <MACscr> so ok, so what do you think of my simple 6 disk idea though even with a replica of 3?
[6:36] <beardo_> ah, right
[6:36] <beardo_> same principle
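A quick way to see the striping parameters for a given image (a sketch; the pool and image names are placeholders) is rbd info, which reports the object size ("order 22" means 4 MB objects):

    rbd info rbd/test    # shows size, order/object size, format, and parent (if it is a clone)
    # e.g. a 1 GB image with 4 MB objects is split into 256 objects, each mapped to its own PG and OSD set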
[6:36] <MACscr> trying to keep costs down, but with some speed still. I am limited to sata 2 though. =(
[6:36] <MACscr> i do have 10GB network though
[6:37] * aldavud (~aldavud@212.243.10.250) has joined #ceph
[6:37] <beardo_> one node?
[6:38] <MACscr> 3 storage nodes, simple 3-node ceph cluster with 4x replica, 3 x 1TB Crucial 550 SSDs (they have capacitors) and 3 x 2TB SATA hard drives. Would use cache tiering with firefly
[6:41] <beardo_> haven't played with cache tiering yet, but it sounds doable
[6:41] <beardo_> you might also want to try erasure coding on the hard drives
[6:41] <MACscr> nah, performance hit
[6:42] <beardo_> haven't seen it in my testing yet
[6:42] <iggy> 4 replicas on 3 nodes?
[6:42] <beardo_> and you have a ton of fast disk in front
[6:42] <MACscr> lol, whoops, meant 3
[6:42] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Ping timeout: 480 seconds)
[6:42] <MACscr> id like to get the 1tb 840 evo's, but seems too risky without capacitors
[6:43] <beardo_> unless all of your clients are connected at 10Gb end-to-end, then network is likely to be your bottleneck anyway
[6:43] * ultimape (~Ultimape@c-174-62-192-41.hsd1.vt.comcast.net) has joined #ceph
[6:43] <beardo_> s/then/the/
[6:43] <kraken> beardo_ meant to say: unless all of your clients are connected at 10Gb end-to-end, the network is likely to be your bottleneck anyway
[6:44] <MACscr> beardo_: not all, but with 3 storage servers and 6 kvm hosts, i think they will be fine (4 x 1gb each host minimum)
[6:44] * lucas1 (~Thunderbi@218.76.25.66) Quit (Ping timeout: 480 seconds)
[6:44] * aldavud (~aldavud@212.243.10.250) Quit (Read error: Operation timed out)
[6:47] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[6:48] <beardo_> MACscr, it may also be worth looking at getting more, smaller SSDs
[6:49] <beardo_> so the read/writes get spread out a bit more
[6:51] <MACscr> hmm, even if i cut the disks in half and bought twice as many, it would cost about 20% more =/
[6:51] <MACscr> and obviously then limit my expansion options
[6:51] <MACscr> though probably something i should consider
[6:51] <iggy> that seems odd, based on what I've seen of SSD prices
[6:51] <MACscr> iggy: why? the larger you go, the cheaper it is per gb
[6:52] <MACscr> im not talking about pure enterprise drives though, more the middle ground ones that still have capacitors
[6:52] <iggy> that hasn't been what I've seen
[6:52] <ultimape> when i run "ceph-deploy admin client-machine", where on client-machine does it put the admin keyring file?
[6:53] <MACscr> like the crucial 550 1TB are about $490 and about $290 for the 512GB ones
[6:54] <iggy> I guess it's been a couple weeks since I've looked at SSD prices
[6:54] * drankis_ (~drankis__@37.148.173.239) has joined #ceph
[6:55] * Cube (~Cube@66.87.64.238) Quit (Ping timeout: 480 seconds)
[6:55] <MACscr> iggy: you doing any storage tiering?
[6:55] <iggy> lol, no
[6:56] <iggy> if I was going to go to that extent, I'd just go ALL SSD
[6:56] <beardo_> MACscr, http://www.provantage.com/crucial-technology-ct512mx100ssd1~7CIAS01W.htm
[6:57] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[6:57] * rdas (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[6:58] * sleinen1 (~Adium@2001:620:0:26:ccb9:6ffb:a416:f2ca) has joined #ceph
[6:58] <MACscr> beardo_: thanks!
[6:58] <MACscr> i wasnt even aware of that model
[6:58] <beardo_> no problem
[7:00] * yuriw (~Adium@AMarseille-151-1-82-7.w92-150.abo.wanadoo.fr) has joined #ceph
[7:01] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:02] <MACscr> ah, crap. looks like Crucial's firmware suffers from a lack of ssd health reporting
[7:02] <MACscr> hmm
[7:02] <MACscr> that could be a big problem. guess i need to read more about htat
[7:02] <MACscr> that
[7:05] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[7:08] * yuriw (~Adium@AMarseille-151-1-82-7.w92-150.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[7:09] <MACscr> beardo_: http://www.bhphotovideo.com/bnh/controller/home?O=&sku=1053614&gclid=CjkKEQjwttWcBRCuhYjhouveusIBEiQAwjy8IGHmJJYplJDPZZQhcr-oiUYyI4KsfseSK91O4rogsFXw_wcB&is=REG&Q=&A=details
[7:09] <MACscr> wow, $200!
[7:09] * Cube (~Cube@66-87-131-223.pools.spcsdns.net) has joined #ceph
[7:09] * vbellur1 (~vijay@122.167.108.13) Quit (Read error: Operation timed out)
[7:09] <ultimape> Where are the source files for the documentation on http://ceph.com/docs/master/start/quick-rbd/ ? I have a couple of small adjustments that I would love to put in a pull request for.
[7:10] <dmick> ultimape: in the ceph repo
[7:11] <aarontc> What's the longevity like on crucial vs intel and samsung?
[7:11] <MACscr> not great at least according to their claims, about 72TB
[7:12] <MACscr> but ive been reading that a lot of them typically go about 3 times what is claimed
[7:12] <aarontc> hmm, that's about the same as samsung claims
[7:12] <ultimape> hmm, I didn't realize github supports .rst files
[7:13] <aarontc> I have about 100 samsungs deployed with no failures yet, but it's only been a year. almost 300 intel 320 and 520 series going on 4 years
[7:13] * Azendale (~erik@216.7.125.200) has joined #ceph
[7:13] <ultimape> dmick: thanks!
[7:13] <dmick> np
[7:13] <MACscr> which samsungs? pro?
[7:13] <aarontc> EVO
[7:15] <MACscr> now as i mentioned, i only have sata 2. That should only really limit total transfer speed (aka, wont be able to hit the 500MB/s speeds), but that shouldnt really affect the random access speeds, right?
[7:16] * Azendale (~erik@216.7.125.200) has left #ceph
[7:16] <aarontc> correct
[7:19] <MACscr> crap, so tempting. They even have bill me later at that store
[7:25] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[7:27] * yuriw (~Adium@AMarseille-151-1-82-7.w92-150.abo.wanadoo.fr) has joined #ceph
[7:30] <ultimape> welp, first pull request is documentation. Feels good.
[7:32] * saurabh (~saurabh@nat-pool-blr-t.redhat.com) has joined #ceph
[7:35] * yanfali_lap (~yanfali@75-101-14-52.static.sonic.net) has joined #ceph
[7:35] * yuriw (~Adium@AMarseille-151-1-82-7.w92-150.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[7:36] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[7:36] * angdraug (~angdraug@63.142.161.6) has joined #ceph
[7:37] * aldavud (~aldavud@212.243.10.250) has joined #ceph
[7:44] <ultimape> MACscr, the max transfer will be ~256 MB/s, that crucial mx100 is still going to saturate a sata2 link. maybe you can get a cheaper one unless you plan to put it into better hardware down the road. [ http://www.storagereview.com/crucial_mx100_ssd_review ]
[7:44] * vbellur (~vijay@209.132.188.8) has joined #ceph
[7:44] <ultimape> it clocks in at ~280mb/s random writes
[7:45] * aldavud (~aldavud@212.243.10.250) Quit (Ping timeout: 480 seconds)
[7:47] <MACscr> ultimape: where are you seeing that low of a speed? sure you arent looking at the 256gb model?
[7:48] * erice (~erice@50.240.86.181) Quit (Ping timeout: 480 seconds)
[7:49] * wogri (~wolf@nix.wogri.at) Quit (Remote host closed the connection)
[7:50] <ultimape> my ocz vertex 3 maxes out my sata II bus on sequential, but not random, the theoretical limit is 300MB/s, but its normally 265
[7:50] * wogri (~wolf@nix.wogri.at) has joined #ceph
[7:54] * ikrstic (~ikrstic@178-221-100-206.dynamic.isp.telekom.rs) has joined #ceph
[7:56] * zerick (~eocrospom@190.118.32.106) has joined #ceph
[7:58] <ultimape> what would I want to look into if I want to setup a co-location thing for a ceph cluster?
[7:59] <ultimape> is that all just in the rule configuration? (assuming I have a connection between sites already)
[8:00] <dmick> you mean georeplication?
[8:01] <ultimape> Yeah
[8:02] * yuriw (~Adium@AMarseille-151-1-82-7.w92-150.abo.wanadoo.fr) has joined #ceph
[8:02] * madkiss (~madkiss@2001:6f8:12c3:f00f:f53b:3925:2e33:3992) has joined #ceph
[8:02] <dmick> the real support for that at the moment is with the S3 gateway
[8:03] <dmick> Ceph proper doesn't necessarily work well on long-latency links
[8:04] <ultimape> How about 2 datacenters across town? I think there is a 100mb line connecting them.
[8:04] <ultimape> There is going to be a fiber connection, but I don't know when that is going in.
[8:05] <ultimape> or I suppose more immediately, setting up a bunch of nodes in my basement, then having a backup in my attic in case my basement floods.
[8:11] * zack_dolby (~textual@e0109-114-22-12-137.uqwimax.jp) Quit (Read error: Connection reset by peer)
[8:12] * zack_dolby (~textual@e0109-114-22-12-137.uqwimax.jp) has joined #ceph
[8:16] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[8:17] * sleinen1 (~Adium@2001:620:0:26:ccb9:6ffb:a416:f2ca) Quit (Ping timeout: 480 seconds)
[8:17] * rdas (~rdas@nat-pool-pnq-t.redhat.com) Quit (Quit: Leaving)
[8:20] <absynth> ultimape: forget it
[8:21] <ultimape> :sadface:
[8:21] <absynth> or rather: ask in a year
[8:21] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[8:23] * vbellur (~vijay@209.132.188.8) Quit (Ping timeout: 480 seconds)
[8:30] * sleinen (~Adium@macsl.switch.ch) has joined #ceph
[8:30] * rdas (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[8:30] <ultimape> speaking of asking later, is there a kanban or roadmap I can look at?
[8:31] * sleinen1 (~Adium@2001:620:0:26:dcd3:c457:1fa3:9297) has joined #ceph
[8:32] * imriz (~imriz@82.81.163.130) has joined #ceph
[8:33] * vbellur (~vijay@nat-pool-blr-t.redhat.com) has joined #ceph
[8:33] <absynth> very high-level: http://www.inktank.com/enterprise/roadmap/
[8:33] <absynth> very short-term: http://tracker.ceph.com/projects/ceph/roadmap
[8:37] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[8:38] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[8:38] * rdas (~rdas@nat-pool-pnq-t.redhat.com) Quit (Quit: Leaving)
[8:38] * sleinen (~Adium@macsl.switch.ch) Quit (Ping timeout: 480 seconds)
[8:42] <ultimape> hmm, so it looks like i'll be able to do something like that using erasure pools? is that what you are talking about waiting for?
[8:43] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Remote host closed the connection)
[8:46] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[8:48] <absynth> no, i am talking about waiting until it's reliably implemented
[8:48] <absynth> latency is a huge issue for ceph
[8:53] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[8:56] * evl (~chatzilla@139.216.138.39) has joined #ceph
[8:57] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[8:57] * yanfali_lap (~yanfali@75-101-14-52.static.sonic.net) Quit (Quit: yanfali_lap)
[8:58] * rendar (~I@host138-179-dynamic.12-79-r.retail.telecomitalia.it) has joined #ceph
[8:59] * thomnico (~thomnico@2a01:e35:8b41:120:213f:c6e2:f791:8d5e) has joined #ceph
[9:00] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:03] * madkiss (~madkiss@2001:6f8:12c3:f00f:f53b:3925:2e33:3992) Quit (Ping timeout: 480 seconds)
[9:06] * sep (~sep@2a04:2740:1:0:52e5:49ff:feeb:32) has joined #ceph
[9:06] * madkiss (~madkiss@2001:6f8:12c3:f00f:a130:4605:4ee7:1ec4) has joined #ceph
[9:06] * yuriw (~Adium@AMarseille-151-1-82-7.w92-150.abo.wanadoo.fr) Quit (Quit: Leaving.)
[9:07] * rdas (~rdas@nat-pool-pnq-t.redhat.com) has joined #ceph
[9:10] * aldavud (~aldavud@213.55.176.188) has joined #ceph
[9:10] * aldavud_ (~aldavud@213.55.176.188) has joined #ceph
[9:19] * angdraug (~angdraug@63.142.161.6) Quit (Quit: Leaving)
[9:20] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[9:20] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[9:22] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[9:23] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[9:24] * kapil (~ksharma@2620:113:80c0:5::2222) Quit (Remote host closed the connection)
[9:26] * analbeard (~shw@support.memset.com) has joined #ceph
[9:26] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit ()
[9:30] * kapil (~ksharma@2620:113:80c0:5::2222) has joined #ceph
[9:31] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Ping timeout: 480 seconds)
[9:31] * evl (~chatzilla@139.216.138.39) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 29.0.1/20140514131124])
[9:33] * kwaegema (~kwaegema@daenerys.ugent.be) has joined #ceph
[9:34] * sleinen (~Adium@2001:620:0:26:2ded:af6f:cbf2:27bd) has joined #ceph
[9:35] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[9:36] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[9:37] * sleinen1 (~Adium@2001:620:0:26:dcd3:c457:1fa3:9297) Quit (Ping timeout: 480 seconds)
[9:40] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[9:42] * Infitialis (~infitiali@194.30.182.18) has joined #ceph
[9:45] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:48] <ultimape> the cephFS isn't recommended for production data, but is it safe to play around with it on the same ceph cluster hosting a block-device?
[9:48] <ultimape> my instinct says yes, but thought there might be some gotcha's
[9:49] <absynth> wouldn't risk it
[9:49] <absynth> if the rbd stuff is productive
[9:49] <absynth> if it's a testing environment, knock yourself out
[9:59] * aldavud (~aldavud@213.55.176.188) Quit (Ping timeout: 480 seconds)
[9:59] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:59] * aldavud_ (~aldavud@213.55.176.188) Quit (Ping timeout: 480 seconds)
[9:59] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit ()
[9:59] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[10:05] * Infitialis (~infitiali@194.30.182.18) Quit (Remote host closed the connection)
[10:05] * Infitialis (~infitiali@194.30.182.18) has joined #ceph
[10:06] * mdjp (~mdjp@2001:41d0:52:100::343) Quit (Quit: mdjp has quit)
[10:06] * mdjp- (~mdjp@2001:41d0:52:100::343) Quit (Quit: mdjp has quit)
[10:07] * mdjp (~mdjp@2001:41d0:52:100::343) has joined #ceph
[10:07] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:17] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[10:19] * fdmanana (~fdmanana@bl10-142-30.dsl.telepac.pt) has joined #ceph
[10:19] * dneary (~dneary@87-231-145-225.rev.numericable.fr) has joined #ceph
[10:21] * midekra (~dennis@ariel.xs4all.nl) has joined #ceph
[10:21] * jpierre03 (~jpierre03@5275675.test.dnsbl.oftc.net) Quit (Remote host closed the connection)
[10:21] * jpierre03_ (~jpierre03@5275675.test.dnsbl.oftc.net) Quit (Read error: Connection reset by peer)
[10:22] * zack_dol_ (~textual@pw126205137242.3.panda-world.ne.jp) has joined #ceph
[10:22] * zack_dolby (~textual@e0109-114-22-12-137.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[10:23] * jpierre03 (~jpierre03@voyage.prunetwork.fr) has joined #ceph
[10:24] * jpierre03_ (~jpierre03@voyage.prunetwork.fr) has joined #ceph
[10:24] * ksingh (~Adium@2001:708:10:10:5817:3fe:2394:fe12) has joined #ceph
[10:25] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[10:26] * lucas1 (~Thunderbi@222.240.148.154) has joined #ceph
[10:28] * Cube1 (~Cube@66.87.131.109) has joined #ceph
[10:29] * Cube1 (~Cube@66.87.131.109) Quit ()
[10:33] * LeaChim (~LeaChim@host86-174-77-240.range86-174.btcentralplus.com) has joined #ceph
[10:34] * Cube (~Cube@66-87-131-223.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[10:41] * zack_dol_ (~textual@pw126205137242.3.panda-world.ne.jp) Quit (Read error: Connection reset by peer)
[10:41] * zack_dolby (~textual@e0109-114-22-12-137.uqwimax.jp) has joined #ceph
[10:48] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has joined #ceph
[10:48] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has left #ceph
[10:57] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[10:57] * sleinen1 (~Adium@macsl.switch.ch) has joined #ceph
[10:58] * sleinen (~Adium@2001:620:0:26:2ded:af6f:cbf2:27bd) Quit (Ping timeout: 480 seconds)
[11:01] * jpierre03 (~jpierre03@voyage.prunetwork.fr) Quit (Ping timeout: 480 seconds)
[11:01] * jpierre03_ (~jpierre03@voyage.prunetwork.fr) Quit (Ping timeout: 480 seconds)
[11:02] <MACscr> 32MB vs 64MB cache on a spindle drive isnt really going to make a difference when it comes to ceph, right?
[11:04] * allsystemsarego (~allsystem@188.27.188.69) has joined #ceph
[11:04] * aldavud (~aldavud@213.55.176.188) has joined #ceph
[11:04] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[11:04] * aldavud_ (~aldavud@213.55.176.188) has joined #ceph
[11:05] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[11:05] * jpierre03 (~jpierre03@voyage.prunetwork.fr) has joined #ceph
[11:05] * yanzheng (~zhyan@134.134.137.75) Quit (Remote host closed the connection)
[11:06] * jpierre03_ (~jpierre03@voyage.prunetwork.fr) has joined #ceph
[11:07] <absynth> MACscr: you'll do the bulk of your caching in the controller, not the drive, so: no
[11:07] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[11:08] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[11:10] * lucas1 (~Thunderbi@222.240.148.154) Quit (Quit: lucas1)
[11:11] * madkiss1 (~madkiss@chello084112124211.20.11.vie.surfer.at) has joined #ceph
[11:11] * fghaas1 (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has joined #ceph
[11:12] * ade (~abradshaw@dslb-088-074-025-185.pools.arcor-ip.net) has joined #ceph
[11:14] * lucas1 (~Thunderbi@222.240.148.154) has joined #ceph
[11:15] * madkiss (~madkiss@2001:6f8:12c3:f00f:a130:4605:4ee7:1ec4) Quit (Ping timeout: 480 seconds)
[11:17] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[11:19] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Quit: Leaving)
[11:29] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[11:31] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[11:31] * zidarsk8 (~zidar@88.200.36.116) has joined #ceph
[11:33] * aldavud (~aldavud@213.55.176.188) Quit (Ping timeout: 480 seconds)
[11:33] * aldavud_ (~aldavud@213.55.176.188) Quit (Ping timeout: 480 seconds)
[11:43] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[11:43] * zidarsk8 (~zidar@88.200.36.116) Quit (Read error: Connection reset by peer)
[11:50] * jeremy___s (~jeremy__s@AStDenis-552-1-167-139.w80-8.abo.wanadoo.fr) has joined #ceph
[11:53] <ingard> hi guys. anyone around ?
[11:55] * fghaas1 (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has left #ceph
[11:56] <absynth> depends
[11:56] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:01] * ade (~abradshaw@dslb-088-074-025-185.pools.arcor-ip.net) Quit (Remote host closed the connection)
[12:02] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[12:03] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Read error: Operation timed out)
[12:04] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[12:04] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[12:10] * ikrstic (~ikrstic@178-221-100-206.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[12:12] * admin (~chatzilla@46-126-224-128.dynamic.hispeed.ch) has joined #ceph
[12:13] * admin (~chatzilla@46-126-224-128.dynamic.hispeed.ch) Quit ()
[12:13] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[12:14] * admin (~chumbri@46-126-224-128.dynamic.hispeed.ch) has joined #ceph
[12:15] * admin (~chumbri@46-126-224-128.dynamic.hispeed.ch) has left #ceph
[12:16] * isodude (~isodude@kungsbacka.oderland.com) Quit (Remote host closed the connection)
[12:16] * chumbri (~chumbri@46-126-224-128.dynamic.hispeed.ch) has joined #ceph
[12:22] <kfei> I currently have 3 monitors with 0 OSDs, and I want to change the cluster's public network, should I log in to every monitor and edit `/etc/ceph/ceph.conf` then restart each?
[12:23] <kfei> or tools like ceph-deploy can help?
[12:23] <ingard> absynth: i need help with the graphite part of calamari :)
[12:24] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has joined #ceph
[12:29] * chumbri (~chumbri@46-126-224-128.dynamic.hispeed.ch) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 26.0/20131205075310])
[12:34] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[12:37] * vbellur (~vijay@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[12:40] * Clbh (~benoit@cyllene.anchor.net.au) Quit (Ping timeout: 480 seconds)
[12:42] * Clbh (~benoit@cyllene.anchor.net.au) has joined #ceph
[12:44] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) has joined #ceph
[12:45] * odyssey4me (~odyssey4m@165.233.71.2) has joined #ceph
[12:46] * sleinen1 (~Adium@macsl.switch.ch) Quit (Ping timeout: 480 seconds)
[12:47] * Georgyo (~georgyo@shamm.as) Quit (Ping timeout: 480 seconds)
[12:47] * Georgyo (~georgyo@shamm.as) has joined #ceph
[12:49] * vbellur (~vijay@209.132.188.8) has joined #ceph
[12:49] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[12:57] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[12:57] * ade (~abradshaw@dslb-088-074-025-185.pools.arcor-ip.net) has joined #ceph
[12:58] * lucas1 (~Thunderbi@222.240.148.154) Quit (Ping timeout: 480 seconds)
[13:00] * fdmanana (~fdmanana@bl10-142-30.dsl.telepac.pt) Quit (Quit: Leaving)
[13:01] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:02] * rwheeler (~rwheeler@173.48.207.57) has joined #ceph
[13:05] * lalatenduM (~lalatendu@nat-pool-blr-t.redhat.com) Quit (Quit: Leaving)
[13:06] * adam1 (~adam@2001:8b0:281:78ec:e2cb:4eff:fe01:f767) has joined #ceph
[13:09] * adam1 is now known as verdurin
[13:12] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[13:13] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit (Quit: koleosfuscus)
[13:13] * chumbri (~oftc-webi@62.12.129.162) has joined #ceph
[13:15] * saurabh (~saurabh@nat-pool-blr-t.redhat.com) Quit (Ping timeout: 480 seconds)
[13:16] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[13:17] * dmsimard_away is now known as dmsimard
[13:20] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[13:21] * The_Bishop_ (~bishop@f055051063.adsl.alicedsl.de) has joined #ceph
[13:26] <ultimape> I'm getting a "reached concerning levels of available space on local monitor storage". any idea how big my OS disk needs to be?
[13:27] <ultimape> or how to calculate monitor disk usage?
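That warning comes from the monitor's free-space threshold on its data partition (mon_data_avail_warn, a percent-free threshold), so it is about how full the disk is rather than an absolute size; a minimal sketch for checking it, assuming default paths:

    df -h /var/lib/ceph/mon                                         # the mon store; the warning fires on % free space
    ceph daemon mon.$(hostname -s) config get mon_data_avail_warn   # show the configured warning threshold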
[13:27] * zlem (~zlem@46-246-111-106.vps.gridlane.net) has left #ceph
[13:28] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[13:29] * The_Bishop__ (~bishop@f055014054.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[13:31] * huangjun (~kvirc@111.173.98.164) Quit (Ping timeout: 480 seconds)
[13:32] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[13:33] <chumbri> Hi guys, I tried ceph with CentOS 6.5 (Quickstart) and ran into some missing directory and file permission problems (which I was able to solve). Just wanted to know if there might be a better ceph experience with an Ubuntu operating system? Or is there a recommendation on which OS ceph works best?
[13:33] * zack_dolby (~textual@e0109-114-22-12-137.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[13:35] * leseb (~leseb@185.21.174.206) has joined #ceph
[13:35] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[13:38] <valtha> Anyone aware of anything similar to rbd-fuse but for just plain rados? (I actually just care about reading the objects so it could be dirt simple for my purposes)
[13:39] * valtha is now known as Ormod
[13:41] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:42] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) Quit (Quit: Konversation terminated!)
[13:45] * rwheeler (~rwheeler@173.48.207.57) Quit (Quit: Leaving)
[13:52] <joao> Ormod, librados?
[13:53] * ganders (~root@200-127-158-54.net.prima.net.ar) has joined #ceph
[13:57] * ajazdzewski (~quassel@lpz-66.sprd.net) has joined #ceph
[13:59] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[13:59] * scuttlemonkey (~scuttlemo@63.138.96.2) Quit (Read error: Operation timed out)
[14:01] * lucas1 (~Thunderbi@222.247.57.50) Quit (Quit: lucas1)
[14:01] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Quit: Leaving.)
[14:02] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:02] * vbellur (~vijay@209.132.188.8) Quit (Ping timeout: 480 seconds)
[14:02] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[14:02] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[14:04] * zhaochao (~zhaochao@124.205.245.26) has left #ceph
[14:06] * lucas1 (~Thunderbi@222.247.57.50) Quit ()
[14:16] * lalatenduM (~lalatendu@122.167.40.237) has joined #ceph
[14:17] <alfredodeza> kfei: ceph-deploy could help there for the pushing of the ceph.conf file
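A minimal sketch of that, assuming the edited ceph.conf sits in the ceph-deploy working directory and the monitors are named mon1..mon3 (hostnames are placeholders):

    ceph-deploy --overwrite-conf config push mon1 mon2 mon3   # copy the edited ceph.conf to /etc/ceph on each mon
    # then restart the monitor daemon on each host, e.g.
    sudo restart ceph-mon id=$(hostname -s)                   # Upstart systems; use the sysvinit ceph service script elsewhere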
[14:17] * lalatenduM (~lalatendu@122.167.40.237) Quit ()
[14:17] * lalatenduM (~lalatendu@122.167.40.237) has joined #ceph
[14:17] * koleosfuscus (~koleosfus@ws116-110.unine.ch) has joined #ceph
[14:25] * vbellur (~vijay@122.167.108.13) has joined #ceph
[14:30] * koleosfuscus (~koleosfus@ws116-110.unine.ch) Quit (Quit: koleosfuscus)
[14:31] * koleosfuscus (~koleosfus@ws116-110.unine.ch) has joined #ceph
[14:35] <Amto_res> Hello, I am trying to view the actions performed on a bucket via the command: "radosgw-admin log show --bucket-id=rbd --date=2014-06-10" but this returns an error like: "Specify an object or a date, bucket and bucket-id". Any idea?
[14:35] <Amto_res> I am specifying the date and the bucket, though...
[14:42] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[14:43] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[14:49] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) has joined #ceph
[14:49] * dereky (~derek@pool-71-114-104-38.washdc.fios.verizon.net) Quit (Quit: dereky)
[14:53] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[14:55] * scuttlemonkey (~scuttlemo@63.138.96.2) has joined #ceph
[14:55] * ChanServ sets mode +o scuttlemonkey
[15:03] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[15:10] * koleosfuscus (~koleosfus@ws116-110.unine.ch) Quit (Quit: koleosfuscus)
[15:11] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[15:14] * dereky (~derek@proxy00.umiacs.umd.edu) has joined #ceph
[15:21] * markbby (~Adium@168.94.245.1) has joined #ceph
[15:22] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[15:27] * ikrstic (~ikrstic@178-221-100-206.dynamic.isp.telekom.rs) has joined #ceph
[15:30] * koleosfuscus (~koleosfus@ws116-110.unine.ch) has joined #ceph
[15:36] * danieagle (~Daniel@186.214.77.228) Quit (Ping timeout: 480 seconds)
[15:43] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:45] * danieagle (~Daniel@186.214.48.173) has joined #ceph
[15:47] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[15:51] * koleosfuscus (~koleosfus@ws116-110.unine.ch) Quit (Quit: koleosfuscus)
[15:55] * huangjun (~kvirc@117.151.48.214) has joined #ceph
[16:08] * koleosfuscus (~koleosfus@ws11-189.unine.ch) has joined #ceph
[16:10] * koleosfuscus (~koleosfus@ws11-189.unine.ch) Quit ()
[16:10] * ajazdzewski (~quassel@lpz-66.sprd.net) Quit (Remote host closed the connection)
[16:11] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[16:12] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[16:14] * Cube (~Cube@66.87.130.3) has joined #ceph
[16:18] * rpowell (~rpowell@128.135.219.215) has joined #ceph
[16:19] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:20] * thomnico (~thomnico@2a01:e35:8b41:120:213f:c6e2:f791:8d5e) Quit (Quit: Ex-Chat)
[16:21] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[16:22] * rpowell (~rpowell@128.135.219.215) has left #ceph
[16:22] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[16:24] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:30] * ade (~abradshaw@dslb-088-074-025-185.pools.arcor-ip.net) Quit (Quit: Too sexy for his shirt)
[16:30] * zidarsk8 (~zidar@2001:1470:fffe:fe01:e2ca:94ff:fe34:7822) has joined #ceph
[16:30] * zidarsk8 (~zidar@2001:1470:fffe:fe01:e2ca:94ff:fe34:7822) has left #ceph
[16:31] * koleosfuscus (~koleosfus@ws116-110.unine.ch) has joined #ceph
[16:31] * ade (~abradshaw@dslb-088-074-025-185.pools.arcor-ip.net) has joined #ceph
[16:35] * ade (~abradshaw@dslb-088-074-025-185.pools.arcor-ip.net) Quit ()
[16:38] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[16:45] * imriz (~imriz@82.81.163.130) Quit (Ping timeout: 480 seconds)
[16:50] * scuttlemonkey (~scuttlemo@63.138.96.2) Quit (Remote host closed the connection)
[16:54] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[16:56] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[17:00] * al (d@niel.cx) Quit (Remote host closed the connection)
[17:04] * al (d@niel.cx) has joined #ceph
[17:06] * ksingh (~Adium@2001:708:10:10:5817:3fe:2394:fe12) has left #ceph
[17:07] * rturk|afk is now known as rturk
[17:09] * wschulze1 (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[17:09] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[17:09] * imriz (~imriz@82.81.163.130) has joined #ceph
[17:11] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:12] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:14] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[17:15] * rturk is now known as rturk|afk
[17:17] * sarob (~sarob@2601:9:1d00:c7f:8c05:c111:f0cc:406a) has joined #ceph
[17:19] * thomnico (~thomnico@2a01:e35:8b41:120:213f:c6e2:f791:8d5e) has joined #ceph
[17:22] * jrankin (~jrankin@d47-69-66-231.try.wideopenwest.com) has joined #ceph
[17:22] * koleosfuscus (~koleosfus@ws116-110.unine.ch) Quit (Quit: koleosfuscus)
[17:25] * Infitialis (~infitiali@194.30.182.18) Quit (Ping timeout: 480 seconds)
[17:27] * mtanski (~mtanski@65.107.210.227) has joined #ceph
[17:28] * mtanski (~mtanski@65.107.210.227) Quit ()
[17:28] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:30] * morse (~morse@supercomputing.univpm.it) Quit (Remote host closed the connection)
[17:30] * narb (~Jeff@38.99.52.10) has joined #ceph
[17:31] * yanfali_lap (~yanfali@75-101-14-52.static.sonic.net) has joined #ceph
[17:33] * koleosfuscus (~koleosfus@ws116-110.unine.ch) has joined #ceph
[17:34] * rdas (~rdas@nat-pool-pnq-t.redhat.com) Quit (Quit: Leaving)
[17:40] * kwaegema (~kwaegema@daenerys.ugent.be) Quit (Ping timeout: 480 seconds)
[17:40] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[17:40] <alphe> hello everyone
[17:40] <alphe> I want to know how cloning of an rbd image works
[17:42] * yanfali_lap (~yanfali@75-101-14-52.static.sonic.net) Quit (Quit: yanfali_lap)
[17:42] <alphe> the way i understand it is that I create an initial rbd image, call it mydisk, with format 2, I take a snapshot of it, then I clone that rbd image into clientedisk1
[17:43] <alphe> then I can mount clientedisk1 on my client proxy and write to it without affecting the initial image, right ?
[17:43] * koleosfuscus (~koleosfus@ws116-110.unine.ch) Quit (Quit: koleosfuscus)
[17:46] <JCL> alphe: Let's say you have an RBD image named test. You take a snapshot of test, you protect the snapshot so it can't be deleted, then you clone your snapshot. You can then map the clone where you need it and read from or write to it.
[17:47] <JCL> rbd snap create
[17:47] <JCL> rbd snap protect
[17:47] <JCL> rbd clone
[17:47] <JCL> That's the 3 steps you must follow
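Spelled out with placeholder names (pool rbd, image test, snapshot base, clone clientdisk1), the three steps plus the map look roughly like this; the image must be format 2 for cloning to work:

    rbd snap create rbd/test@base
    rbd snap protect rbd/test@base           # protected snapshots cannot be deleted while clones exist
    rbd clone rbd/test@base rbd/clientdisk1
    rbd map rbd/clientdisk1                  # map the clone on the client and use it like any other image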
[17:48] <alphe> yes jcl but my questions are: does the data i write to the clone get inserted into the parent disk ?
[17:48] <alphe> can i mount several disks on the same proxy ?
[17:48] <JCL> Answer is NO
[17:48] <alphe> last question: if i delete the clone are the replicas gone too ?
[17:48] <JCL> To the data going to the parent image
[17:49] <JCL> If you delete the clone, you only delete the clone data
[17:49] <JCL> Not the original RBD image, not even the snapshot that you protected
[17:49] <alphe> ok perfect that is the way i imagined it to work
[17:49] <alphe> thank you JCL
[17:50] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[17:50] <JCL> And for the mount question. Nothing prevents you from doing it. But then comes the problem of consistency if multiple people write to it.
[17:50] <JCL> So not recommended
[17:50] * sverrest_ (~sverrest@cm-84.208.166.184.getinternet.no) Quit (Read error: Connection reset by peer)
[17:51] * koleosfuscus (~koleosfus@ws116-110.unine.ch) has joined #ceph
[17:51] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[17:51] <alphe> jcl I mean on a single proxy i mount clone1, clone2, clone3; not clone1 on proxy1, clone1 on proxy2, clone1 on proxy3
[17:53] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[17:54] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[17:55] * sverrest (~sverrest@cm-84.208.166.184.getinternet.no) has joined #ceph
[17:57] <JCL> No I was talking about mounting the same clone on 2 different servers
[17:57] <JCL> clone1 on server1 and clone1 on server2
[17:58] <JCL> Both server1 and server2 writing to clone1
[17:59] * chumbri (~oftc-webi@62.12.129.162) Quit (Quit: Page closed)
[18:00] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[18:00] * brambles (lechuck@s0.barwen.ch) Quit (Remote host closed the connection)
[18:00] <alphe> jcl yes I know I can't mount the same rbd image on 2 different servers; I already experienced that :)
[18:00] <alphe> and it was producing inconsistencies
[18:01] <alphe> ok so logically that should work properly
[18:01] * huangjun_ (~oftc-webi@117.151.48.214) has joined #ceph
[18:03] * huangjun_ (~oftc-webi@117.151.48.214) Quit (Remote host closed the connection)
[18:04] * abonilla (~abonilla@c-69-253-241-144.hsd1.de.comcast.net) has joined #ceph
[18:06] <abonilla> hi - is there a way to rebuild my ceph cluster, but just have it rebuild all OSDs and PGs? It currently says 132 pgs degraded; 51 pgs peering; 141 pgs stale; 36 pgs stuck inactive; 141 pgs stuck stale; 192 pgs stuck unclean; 1/7 in osds are down
[18:08] * brytown (~brytown@142-254-47-204.dsl.dynamic.sonic.net) has joined #ceph
[18:09] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:10] * huangjun (~kvirc@117.151.48.214) Quit (Ping timeout: 480 seconds)
[18:12] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[18:13] * fdmanana (~fdmanana@bl10-142-30.dsl.telepac.pt) has joined #ceph
[18:14] * aldavud (~aldavud@213.55.184.149) has joined #ceph
[18:17] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[18:19] * diegows (~diegows@190.190.5.238) has joined #ceph
[18:19] * koleosfuscus (~koleosfus@ws116-110.unine.ch) Quit (Quit: koleosfuscus)
[18:21] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:22] * brytown (~brytown@142-254-47-204.dsl.dynamic.sonic.net) Quit (Quit: Leaving.)
[18:24] * aldavud (~aldavud@213.55.184.149) Quit (Ping timeout: 480 seconds)
[18:25] * diegows (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[18:25] * madkiss1 (~madkiss@chello084112124211.20.11.vie.surfer.at) Quit (Quit: Leaving.)
[18:28] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:31] * jskinner (~jskinner@69.170.148.179) Quit (Remote host closed the connection)
[18:32] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[18:32] * analbeard (~shw@support.memset.com) has joined #ceph
[18:32] * wattsmarcus5 (~mdw@aa2.linuxbox.com) Quit (Ping timeout: 480 seconds)
[18:35] * danieagle (~Daniel@186.214.48.173) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[18:35] * odyssey4me (~odyssey4m@165.233.71.2) Quit (Ping timeout: 480 seconds)
[18:38] * diegows (~diegows@190.190.5.238) has joined #ceph
[18:38] * jskinner (~jskinner@69.170.148.179) Quit (Read error: Operation timed out)
[18:39] * rweeks (~goodeats@c-24-6-118-113.hsd1.ca.comcast.net) has joined #ceph
[18:42] * koleosfuscus (~koleosfus@ws116-110.unine.ch) has joined #ceph
[18:44] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[18:45] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[18:48] * yuriw (~Adium@121.243.198.77.rev.sfr.net) has joined #ceph
[18:49] * yuriw (~Adium@121.243.198.77.rev.sfr.net) Quit ()
[18:51] * wattsmarcus5 (~mdw@aa2.linuxbox.com) has joined #ceph
[18:53] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[18:58] * xarses (~andreww@12.164.168.117) has joined #ceph
[19:01] <ponyofdeath> hi, where in the output of ceph --admin-daemon /var/run/ceph/rbd-20828.asok perf dump can i see how much of the cache it's using?
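One way to narrow the perf dump output down without knowing the exact counter names (the socket path is copied from the question; the grep is just a filter):

    ceph --admin-daemon /var/run/ceph/rbd-20828.asok perf dump | python -m json.tool | grep -i cache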
[19:03] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[19:07] * lalatenduM (~lalatendu@122.167.40.237) Quit (Read error: Operation timed out)
[19:17] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Remote host closed the connection)
[19:18] * koleosfuscus is now known as Guest13186
[19:18] * koleosfuscus (~koleosfus@164-236.197-178.cust.bluewin.ch) has joined #ceph
[19:18] * koleosfuscus (~koleosfus@164-236.197-178.cust.bluewin.ch) Quit ()
[19:21] * rweeks (~goodeats@c-24-6-118-113.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[19:21] * Guest13186 (~koleosfus@ws116-110.unine.ch) Quit (Ping timeout: 480 seconds)
[19:21] * KindOne (kindone@0001a7db.user.oftc.net) has joined #ceph
[19:26] * thomnico (~thomnico@2a01:e35:8b41:120:213f:c6e2:f791:8d5e) Quit (Quit: Ex-Chat)
[19:26] * JCL (~JCL@2601:9:5980:39b:11be:465b:18d:1c11) Quit (Quit: Leaving.)
[19:30] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[19:30] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[19:30] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[19:33] * JCL (~JCL@2601:9:5980:39b:19ea:54b5:8bde:d8ef) has joined #ceph
[19:36] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[19:36] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:39] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[19:40] * KB (~oftc-webi@cpe-74-137-252-159.swo.res.rr.com) has joined #ceph
[19:46] * KB (~oftc-webi@cpe-74-137-252-159.swo.res.rr.com) Quit (Remote host closed the connection)
[19:50] * lofejndif (~lsqavnbok@tor-exit.eecs.umich.edu) has joined #ceph
[19:51] * dneary (~dneary@87-231-145-225.rev.numericable.fr) Quit (Ping timeout: 480 seconds)
[19:51] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[19:52] * tracphil (~tracphil@130.14.71.217) has joined #ceph
[19:55] * haomaiwang (~haomaiwan@125-227-255-23.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[19:58] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) has joined #ceph
[19:59] * dneary (~dneary@87-231-145-225.rev.numericable.fr) has joined #ceph
[20:01] <alphe> abonilla you can force a scrub
[20:01] <alphe> you can do a reweight-by-utilization 119, that will make your whole cluster reorganise
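As a sketch of what those commands look like (the ids are placeholders):

    ceph osd scrub <osd-id>                  # force a scrub of every PG on one OSD, or:
    ceph pg scrub <pgid>                     # scrub a single PG
    ceph osd reweight-by-utilization 119     # reweight OSDs whose utilization is more than 19% above the cluster average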
[20:04] * lalatenduM (~lalatendu@122.167.40.237) has joined #ceph
[20:04] * lalatenduM (~lalatendu@122.167.40.237) Quit (Quit: Leaving)
[20:07] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[20:11] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[20:16] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[20:18] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:19] <vilobhmm> hi
[20:19] <vilobhmm> How does CEPH guarantee data isolation for volumes which are not meant to be shared in a Openstack tenant?
[20:19] <vilobhmm> When used with OpenStack, data isolation is provided at the OpenStack level, so that all users who are part of the same tenant will be able to access/share the volumes created by users in that tenant. Consider a case where we have one pool named "Volumes" for all the tenants. All the tenants use the same keyring to access the volumes in the pool.
[20:19] <vilobhmm> How do we guarantee that one user can't see the contents of the volumes created by another user, if the volume is not meant to be shared?
[20:19] <vilobhmm> If a malicious user gets access to the keyring (which we use as the authentication mechanism between the client/OpenStack and CEPH), how does CEPH guarantee that the malicious user can't access the volumes in that pool?
[20:19] <vilobhmm> Lets say our Cinder services are running on the OpenStack API node. How does the CEPH keyring information get transferred from the API node to the hypervisor node? Does the keyring get passed through the message queue? If yes, can a malicious user look at the message queue and grab the keyring information? If not, how does it get from the API node to the hypervisor node?
[20:31] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:bcb7:980a:7383:581d) has joined #ceph
[20:34] * imriz (~imriz@82.81.163.130) Quit (Quit: Leaving)
[20:34] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[20:35] * Nacer_ (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[20:35] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[20:38] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Ping timeout: 480 seconds)
[20:52] * alexxy (~alexxy@2001:470:1f14:106::2) has joined #ceph
[20:59] * rendar (~I@host138-179-dynamic.12-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[21:03] * wschulze1 (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[21:04] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:bcb7:980a:7383:581d) Quit (Ping timeout: 480 seconds)
[21:07] * sarob (~sarob@2601:9:1d00:c7f:8c05:c111:f0cc:406a) Quit (Remote host closed the connection)
[21:23] * dmick (~dmick@2607:f298:a:607:202f:72f7:1a7d:b560) Quit (Ping timeout: 480 seconds)
[21:27] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[21:29] * allsystemsarego (~allsystem@188.27.188.69) Quit (Read error: Operation timed out)
[21:30] * allsystemsarego (~allsystem@86.121.2.97) has joined #ceph
[21:32] * dmick (~dmick@2607:f298:a:607:ad:659f:ade0:fffb) has joined #ceph
[21:34] * erice (~erice@50.240.86.181) has joined #ceph
[21:35] * sleinen (~Adium@2001:620:1000:4:d1f8:a284:ee2f:c20f) has joined #ceph
[21:36] * allsystemsarego (~allsystem@86.121.2.97) Quit (Quit: Leaving)
[21:36] * sleinen1 (~Adium@2001:620:0:26:c12:ad5b:eed1:27aa) has joined #ceph
[21:39] * jeremy__1s (~jeremy__s@AStDenis-552-1-167-139.w80-8.abo.wanadoo.fr) has joined #ceph
[21:40] * jeremy___s (~jeremy__s@AStDenis-552-1-167-139.w80-8.abo.wanadoo.fr) Quit (Read error: Connection reset by peer)
[21:40] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) has joined #ceph
[21:42] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[21:43] * sleinen (~Adium@2001:620:1000:4:d1f8:a284:ee2f:c20f) Quit (Ping timeout: 480 seconds)
[21:49] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[21:49] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) Quit (Ping timeout: 480 seconds)
[21:50] * mtanski (~mtanski@65.107.210.227) has joined #ceph
[21:51] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[21:55] <janos_> vilobhmm, that is not an area I am strong in, but all your questions sound like things that ceph has no concern with
[21:56] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[21:56] <janos_> like your case where a malicious user has the keyring info - how does ceph prevent access to volumes in the pool? if the keyring is the access method, ceph doesn't know or care that they are malicious
[21:56] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobhmm)
[21:57] <janos_> in fact, by having auth info, they are not malicious as far as anything can tell
[21:58] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:00] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:bcb7:980a:7383:581d) has joined #ceph
[22:01] <vilobhmm> janos_ : since you are saying "ceph doesn't know or care that they are malicious", does that mean the data can be accessed if someone gets a hold of the keyring?
[22:01] <janos_> generically, if someone has access, they have access. i would think
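To see exactly what a given key is allowed to do, something like the following works; inspecting capabilities is the closest Ceph gets to janos_'s point that possession of the key is the access control (client.cinder is an assumed name):

    ceph auth list                 # every known entity and its caps
    ceph auth get client.cinder    # the caps attached to one key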
[22:01] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) has joined #ceph
[22:01] <janos_> the malicious part sounds like mind-reading
[22:03] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:06] <seapasulli> whenever I run "ceph-disk list" or "ceph-deploy disk list" I get a traceback error, but if I run ceph-deploy disk zap ${host}:disk it works. anyone seen this before?
[22:07] <alfredodeza> seapasulli: can you show us a paste of the errors
[22:07] <alfredodeza> ?
[22:09] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[22:12] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[22:13] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[22:16] <vilobhmm> janos_ : replace malicious with any user who wants to have access to the ceph cluster apart from the user who created the volumes
[22:16] <Serbitar> well openstack will be holding the keyring right
[22:16] <Serbitar> so they would have to compromise openstack
[22:16] * mtanski (~mtanski@65.107.210.227) Quit (Quit: mtanski)
[22:17] * erikl (~lukace@nut-252.br-online.de) Quit (Quit: My damn controlling terminal disappeared!)
[22:17] * sleinen1 (~Adium@2001:620:0:26:c12:ad5b:eed1:27aa) Quit (Ping timeout: 480 seconds)
[22:18] * hasues1 (~hazuez@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[22:19] <seapasulli> sure thing alfredodeza
[22:19] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobhmm)
[22:19] * sleinen (~Adium@2001:620:0:26:f563:a181:42e9:f0fe) has joined #ceph
[22:21] <seapasulli> alfredodeza: http://pastebin.com/kfb3Yhvt
[22:21] <seapasulli> if I do ceph-deploy disk list it seems to be the same
[22:22] * ajazdzewski (~quassel@2001:4dd0:ff00:9081:bcb7:980a:7383:581d) Quit (Ping timeout: 480 seconds)
[22:23] <alfredodeza> seapasulli: what version of ceph are you using? and what does fdisk -l say?
[22:23] * hasues (~hazuez@kwfw01.scrippsnetworksinteractive.com) Quit (Ping timeout: 480 seconds)
[22:24] * xarses (~andreww@12.164.168.117) has joined #ceph
[22:24] <seapasulli> alfredodeza: emperor and http://pastebin.com/Yg1h7Rrr
[22:25] <seapasulli> it was working fine this morning
[22:25] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:25] <alfredodeza> seapasulli: this looks terrible (the traceback) but it looks like it is ceph-disk
[22:26] <alfredodeza> but the traceback is so obscure that I can't tell for sure what is going on
[22:26] <alfredodeza> seapasulli: what version of ceph-deploy are you using?
[22:26] <alfredodeza> you could try 'ceph-deploy osd list {node}'
[22:26] <seapasulli> I did a ceph-deploy --dmcrypt --dmcrypt-key-dir of all of the drives and it worked fine. ceph-disk prepare seems to work just fine too
[22:26] <alfredodeza> if you are on 1.5 or newer
[22:27] <seapasulli> after that ceph-deploy disk list stopped working
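The dmcrypt preparation seapasulli describes would have looked roughly like the following; the host, device, and key directory are placeholders, and the key directory shown is ceph-deploy's default:

    ceph-deploy osd prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys \
        osd-host:/dev/sdb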
[22:28] * ScOut3R (~ScOut3R@51B61466.dsl.pool.telekom.hu) has joined #ceph
[22:28] * jrankin (~jrankin@d47-69-66-231.try.wideopenwest.com) Quit (Quit: Leaving)
[22:29] <seapasulli> gah using ceph-deploy 1.4
[22:30] <alfredodeza> try updating ceph-deploy
[22:30] <alfredodeza> it should be safe to do so
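A sketch of the upgrade path alfredodeza suggests; which install method applies depends on how ceph-deploy was originally installed:

    ceph-deploy --version                                   # reports 1.4.x here
    sudo pip install --upgrade ceph-deploy                  # if it came from pip
    sudo apt-get update && sudo apt-get install --only-upgrade ceph-deploy   # if it came from the apt repo
    ceph-deploy osd list osd-host                           # the 1.5+ command mentioned above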
[22:30] * dereky_ (~derek@proxy00.umiacs.umd.edu) has joined #ceph
[22:30] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:31] * dereky (~derek@proxy00.umiacs.umd.edu) Quit (Ping timeout: 480 seconds)
[22:31] * dereky_ is now known as dereky
[22:32] * mtanski (~mtanski@65.107.210.227) has joined #ceph
[22:33] <seapasulli> there is some weird issue with the trusty ceph firefly repo for me. apt just times out. It works for emperor though.
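The era-appropriate apt source for Firefly on Trusty looked roughly like the line below (hostname and suite are an assumption from memory, not from this conversation); a timeout there is a repository or network issue rather than anything ceph-deploy does:

    # /etc/apt/sources.list.d/ceph.list
    deb http://ceph.com/debian-firefly/ trusty main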
[22:35] * rweeks (~goodeats@192.169.20.75.static.etheric.net) has joined #ceph
[22:36] <seapasulli> if it's ceph-disk, can't I just re-install ceph-common? I don't understand how this could happen
[22:37] * ganders (~root@200-127-158-54.net.prima.net.ar) Quit (Quit: WeeChat 0.4.1)
[22:37] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[22:40] * mtanski (~mtanski@65.107.210.227) Quit (Ping timeout: 480 seconds)
[22:41] * ScOut3R (~ScOut3R@51B61466.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[22:43] * rweeks (~goodeats@192.169.20.75.static.etheric.net) Quit (Quit: Leaving)
[22:45] * fdmanana (~fdmanana@bl10-142-30.dsl.telepac.pt) Quit (Quit: Leaving)
[22:45] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[22:54] * danieagle (~Daniel@186.214.48.173) has joined #ceph
[22:54] <seapasulli> alfredodeza: 1.5 lets me see a list of osds but it looks like none exist. The one test example I made for osd.2 from ceph-disk worked and it's running but any ceph-deploy disk list commands still fail
[22:54] <seapasulli> and it looks like I can't provision disks that way either
[22:55] <alfredodeza> seapasulli: you might be hitting a bug :(
[22:56] <alfredodeza> have you tried the mailing list? you would get more traction there from people that know about Emperor
[22:56] <alfredodeza> what you are seeing seems a bit out of my league
[23:00] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[23:00] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[23:01] <seapasulli> aw thanks alfredodeza
[23:01] <seapasulli> what version of ceph are you running?
[23:02] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: Textual IRC Client: www.textualapp.com)
[23:04] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[23:06] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[23:09] * drankis_ (~drankis__@37.148.173.239) Quit (Ping timeout: 480 seconds)
[23:14] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:19] <ikrstic> seapasulli: Did you try ceph-disk -v list?
[23:21] <seapasulli> alfredodeza: so when I do disk zap of ALL of the drives on the host. disk list works again
[23:21] <seapasulli> yeah it just reports the same ikrstic
[23:21] <seapasulli> same trace
[23:21] <seapasulli> so it's obviously something dumb i did with dmcrypt or something..
[23:22] <seapasulli> anyone have a working cluster with dmcrypt?
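When ceph-disk list trips over dmcrypt-prepared drives, a usual first step is to look at what device-mapper and the partition tables actually contain; a minimal sketch (output omitted here, since seapasulli's paste is the authoritative trace):

    sudo ceph-disk -v list     # the same command with a verbose trace, as ikrstic suggests
    lsblk                      # block devices and any dm-crypt mappings on top of them
    sudo dmsetup ls            # the device-mapper view of those mappings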
[23:24] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:24] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:27] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobhmm)
[23:27] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[23:32] * aldavud_ (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[23:35] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[23:36] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[23:36] * leseb (~leseb@185.21.174.206) has joined #ceph
[23:36] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) Quit (Quit: Leaving)
[23:38] * sarob (~sarob@2601:9:1d00:c7f:f909:5ec2:5166:e8ce) has joined #ceph
[23:38] * rendar (~I@host138-179-dynamic.12-79-r.retail.telecomitalia.it) has joined #ceph
[23:38] * markbby (~Adium@168.94.245.1) Quit (Quit: Leaving.)
[23:41] * rweeks (~goodeats@192.169.20.75.static.etheric.net) has joined #ceph
[23:43] <mo-> "[19551]: (33) Numerical argument out of domain" has anybody seen that before while trying to start a monitor that has crashed?
[23:46] * sarob (~sarob@2601:9:1d00:c7f:f909:5ec2:5166:e8ce) Quit (Ping timeout: 480 seconds)
[23:49] * BManojlovic (~steki@cable-94-189-165-169.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:55] * lofejndif (~lsqavnbok@6FMAABLO9.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[23:56] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[23:57] <mo-> 0> 2014-06-10 22:56:01.839587 b7208740 -1 mon/PGMonitor.cc: In function 'virtual void PGMonitor::update_from_paxos()' thread b7208740 time 2014-06-10 22:56:01.838568
[23:57] <mo-> mon/PGMonitor.cc: 173: FAILED assert(err == 0)
[23:57] <mo-> wondering how I can get that monitor back up..
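mo-'s question goes unanswered in this log. One common recovery path, assuming the remaining monitors still have quorum, is to drop the broken monitor and rebuild its store from the live cluster; this is only a sketch based on the standard add-a-monitor procedure, with mon-b, the paths, and the address all placeholders:

    ceph mon remove mon-b                            # run where quorum still exists
    sudo mv /var/lib/ceph/mon/ceph-mon-b{,.broken}   # keep the damaged store for debugging
    ceph auth get mon. -o /tmp/mon.keyring
    ceph mon getmap -o /tmp/monmap
    sudo ceph-mon -i mon-b --mkfs --monmap /tmp/monmap --keyring /tmp/mon.keyring
    sudo ceph-mon -i mon-b --public-addr 10.0.0.2:6789   # placeholder address; the mon rejoins the map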

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.