#ceph IRC Log

IRC Log for 2014-07-01

Timestamps are in GMT/BST.

[0:00] <dmick> I don't know how common it is, but it would be ideal from my limited perspective to have the distros treat the ceph version as authoritative, and automatically edit additions. I get the impression that's not normally how packaging is done, but it seems like it would involve less interpretation on the distro maintainers' side
[0:00] <dmick> not really in tune with distro packaging convention/workflow yet
[0:00] <_Tass4dar> are you officially the maintainer for fedora?
[0:00] <dmick> I am a maintainer
[0:00] <dmick> but brand new and have done nothing so far.
[0:01] <_Tass4dar> well then i think you are more or less free to define your own workflow in this regard
[0:01] <_Tass4dar> as long as the building/koji-stuff is nicely aligned with fedora standards
[0:01] <dmick> that may be; I'll try to integrate that perspective as I read through the tiny set of requirements and documentation
[0:01] <_Tass4dar> anyway i will check it out later this week
[0:02] <dmick> ;-}
[0:02] <_Tass4dar> in my tz it's bed time, so good night ;)
[0:03] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:03] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[0:03] * sage (~quassel@cpe-23-242-158-79.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[0:04] <MACscr> dmick: http://rainbow.chard.org/2013/01/30/how-to-align-partitions-for-best-performance-using-parted/
[0:05] <MACscr> there is a few calculations that have to be made
[0:05] <MACscr> but as it mentions, even when setting up a partition with parted, it will complain if the alignment isnt optimal
[0:07] <MACscr> but anyway, i see it has an option for checking the alignments and they come up optimal
[0:07] <dmick> oh. for 4k drives.
[0:07] <dmick> or arrays with larger I/O sizes
[0:07] <dmick> sure.
[0:07] * sage (~quassel@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[0:07] * ChanServ sets mode +o sage
[0:08] <dmick> yeah, ceph-disk doesn't try; perhaps sgdisk does, I really don't know
[0:08] <dmick> see man sgdisk, -a option
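
For reference, parted can report whether an existing partition is aligned, and sgdisk's -a option sets the alignment used for new partitions; a quick sketch (device and partition numbers are examples):

    # report whether partition 1 of /dev/sdb starts on an optimal boundary
    parted /dev/sdb align-check optimal 1
    # create a new partition aligned to 2048 sectors (1 MiB with 512-byte sectors)
    sgdisk -a 2048 -n 1:0:0 -t 1:8300 /dev/sdb
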
[0:09] <MACscr> ok, so all my disks were prepared and partitioned the last time i setup this cluster. I created two pools, but the osd's are not part of any of them yet
[0:10] <MACscr> is dumping the crush map and editing it the only way to assign an osd to a pool?
[0:10] * sarob (~sarob@2001:4998:effd:600:64af:33d:df0a:d075) Quit (Remote host closed the connection)
[0:10] * primechu_ (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[0:10] * sarob (~sarob@2001:4998:effd:600:64af:33d:df0a:d075) has joined #ceph
[0:13] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[0:13] <dmick> not sure what you mean; osds aren't "assigned to pools"; pools are divisions of the cluster storage, and objects in the pool are split into PGs which are assigned to OSDs by CRUSH
[0:13] * ScOut3R (~ScOut3R@4E5CC1B5.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[0:14] <MACscr> well i have ssd's for the cache pool and spindles for the cold storage. Trying to figure out how to get this setup
[0:17] * rendar (~I@host149-182-dynamic.37-79-r.retail.telecomitalia.it) Quit ()
[0:18] * sarob (~sarob@2001:4998:effd:600:64af:33d:df0a:d075) Quit (Ping timeout: 480 seconds)
[0:19] <dmick> oh, cache pool
[0:20] <MACscr> isnt a cache pool just a pool configured for caching?
[0:20] <dmick> with magical rules for interoperating with the backing pool, yes
[0:21] <dmick> afaik setting up cache tiers requires you to edit the crush map, yes.
[0:22] <dmick> (if you want to control where the cache pool lives, which you probably do)
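
The crush map round-trip dmick is referring to looks roughly like this (file names are arbitrary; the edit itself, e.g. adding an ssd-only root and a rule that selects from it, is up to you):

    ceph osd getcrushmap -o crushmap.bin
    crushtool -d crushmap.bin -o crushmap.txt
    # edit crushmap.txt, then compile and load it back
    crushtool -c crushmap.txt -o crushmap.new
    ceph osd setcrushmap -i crushmap.new
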
[0:22] <MACscr> basically im just setting things up with 3 nodes with each node having 2 x 512GB SSD's and a single 2TB hard drive
[0:23] <MACscr> i will add a second 2TB HD to each node as well very soon
[0:25] * zviratko (~zviratko@241-73-239-109.cust.centrio.cz) Quit (Read error: Connection reset by peer)
[0:26] <MACscr> so i dont see how i assign an osd to a pool within a crush map
[0:27] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[0:27] <iggy> ceph really isn't one of those pieces of software that you can completely master by copying and pasting a list of commands into a terminal
[0:27] <MACscr> who said anything about mastering? im just trying to get a basic setup with 3 nodes and 9 disks with a cache pool
[0:28] * rturk is now known as rturk|afk
[0:28] <iggy> that's not a basic setup, just fyi
[0:29] <iggy> the cache pool code just made it into a stable, documented release like a month ago
[0:30] * zviratko (~zviratko@241-73-239-109.cust.centrio.cz) has joined #ceph
[0:30] <MACscr> well right now i should just be concentrating on getting the osd's/disks assigned to different pools, right?
[0:31] * cronix1 (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[0:31] <iggy> I'd personally worry about getting any pools working with any OSDs first, then adding the cache tier is pretty simple
[0:32] <iggy> it really sounds like you're trying to put a round peg in a square hole
[0:32] * rturk|afk is now known as rturk
[0:32] <iggy> I'll quote from about 15 lines up "osds aren't "assigned to pools"; pools are divisions of the cluster storage, and objects in the pool are split into PGs which are assigned to OSDs by CRUSH"
[0:33] <iggy> i.e. quit trying to assign pools to OSDs or vice versa
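
In practice the pool-to-OSD relationship is expressed through a CRUSH rule: each pool references a ruleset, and the rule decides which OSDs receive its PGs. A minimal sketch (pool name, PG count and ruleset id are examples):

    ceph osd pool create cold-data 128 128
    ceph osd pool set cold-data crush_ruleset 1   # point the pool at an SSD- or HDD-specific rule
    ceph osd dump | grep '^pool'                  # shows which ruleset each pool uses
    ceph osd crush rule dump
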
[0:33] * sarob (~sarob@2001:4998:effd:600:3dc3:d879:71e9:5699) has joined #ceph
[0:34] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[0:34] <MACscr> well i had a basic pool setup, but then all 9 disks were in it, which obviously wasnt going to work for me. I was told that you needed to assign the pool when they were created, hence why i restarted the process
[0:36] <iggy> I would start by making a fresh/clean setup using the spinners and journals, then as a separate/later step, add the cache pool
[0:36] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[0:36] * dis (~dis@109.110.66.229) has joined #ceph
[0:37] <iggy> that's how the docs are written and that's probably how every setup that is using a cache pool has been setup
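
Once the backing pool works, the documented tiering steps iggy alludes to are roughly the following (firefly-era syntax; pool names are examples):

    ceph osd pool create hot-cache 128 128            # cache pool, mapped to the SSD rule
    ceph osd tier add cold-data hot-cache
    ceph osd tier cache-mode hot-cache writeback
    ceph osd tier set-overlay cold-data hot-cache
    ceph osd pool set hot-cache hit_set_type bloom
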
[0:37] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[0:48] * wusui (~Warren@2607:f298:a:607:c155:2479:68ba:3eeb) has joined #ceph
[0:48] * fsimonce (~simon@host27-60-dynamic.26-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:49] * rturk is now known as rturk|afk
[0:50] <Anticimex> loicd: why'd it be difficult to experience running multi-PB cluster without losing data? ;)
[0:50] <Anticimex> or well, s/production/test/, i could spin it up on AWS.. :)
[0:50] * wusui (~Warren@2607:f298:a:607:c155:2479:68ba:3eeb) has left #ceph
[0:50] <Anticimex> (for <=60 min)
[0:50] * adamcrume (~quassel@c-71-204-162-10.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[0:53] * zidarsk81 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[0:53] * zidarsk81 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[0:53] * rturk|afk is now known as rturk
[0:54] * sarob (~sarob@2001:4998:effd:600:3dc3:d879:71e9:5699) Quit (Remote host closed the connection)
[0:54] * sarob (~sarob@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[0:56] <iggy> that was my plan
[0:56] <iggy> just so I could be at the top of the list on brag.ceph.com
[0:57] <iggy> but I actually did the math and even an hour of that (assuming you could get the cluster up in that time frame... unlikely) was like 30k
[0:57] * sarob (~sarob@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (Read error: Operation timed out)
[0:59] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[1:00] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[1:00] <keds> The docs I've read so far place the cache tier on its own separate hardware
[1:01] <keds> Is there anything stopping us from using SSDs in the existing nodes?
[1:01] <iggy> nope
[1:01] <kraken> http://i.minus.com/iUgVCKwjISSke.gif
[1:03] <Anticimex> iggy: oh, crap.
[1:03] <Anticimex> iggy: oh well, scratch that idea then. :)
[1:04] <iggy> yeah, I'm not blowing half my yearly salary to be on brag.ceph.com
[1:04] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[1:06] * zack_dolby (~textual@pdf8519e7.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:09] * cronix1 (~cronix@5.199.139.166) Quit (Ping timeout: 480 seconds)
[1:09] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[1:10] * sarob (~sarob@2001:4998:effd:600:a9eb:df20:71a0:7def) has joined #ceph
[1:13] <MACscr> ok, i definitely did create myself a mess. if I run purge and purgedata with ceph-deploy, do i need to do anything else to restart things? seems some osd's are showing up that i havent done anything with since i recreated the cluster
[1:16] <MACscr> frustrating because i got my original cluster up pretty quickly
[1:16] <MACscr> so there must be leftovers that arent being purged, etc
[1:17] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[1:18] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[1:20] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[1:24] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[1:25] * huangjun (~kvirc@117.151.54.155) Quit (Ping timeout: 480 seconds)
[1:30] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[1:32] * rturk is now known as rturk|afk
[1:36] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Read error: Connection reset by peer)
[1:36] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[1:36] * ChanServ sets mode +v andreask
[1:38] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[1:38] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: Leaving)
[1:39] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[1:41] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[1:44] * bkero (~bkero@216.151.13.66) Quit (Remote host closed the connection)
[1:46] * bkero (~bkero@216.151.13.66) has joined #ceph
[1:48] * mrjack (mrjack@office.smart-weblications.net) Quit (Ping timeout: 480 seconds)
[1:48] * mrjack (mrjack@pD95F2366.dip0.t-ipconnect.de) has joined #ceph
[1:49] * bkero (~bkero@216.151.13.66) Quit ()
[1:54] * ircolle is now known as ircolle-away
[1:57] * fdmanana (~fdmanana@bl5-3-231.dsl.telepac.pt) Quit (Quit: Leaving)
[1:58] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[1:59] * qhartman (~qhartman@64.207.33.50) Quit (Remote host closed the connection)
[1:59] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[2:00] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:03] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[2:06] * zack_dolby (~textual@e0109-49-132-43-197.uqwimax.jp) has joined #ceph
[2:08] * rwheeler (~rwheeler@173.48.207.57) Quit (Quit: Leaving)
[2:10] * alram (~alram@cpe-76-167-62-129.socal.res.rr.com) Quit (Quit: Lost terminal)
[2:10] * huangjun (~kvirc@111.174.239.37) has joined #ceph
[2:16] * mrjack (mrjack@pD95F2366.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[2:16] * mrjack (mrjack@office.smart-weblications.net) has joined #ceph
[2:23] * dmit2k (~Adium@balticom-131-176.balticom.lv) Quit (Ping timeout: 480 seconds)
[2:27] * rturk|afk is now known as rturk
[2:28] * rturk is now known as rturk|afk
[2:31] * Zethrok (~martin@95.154.26.254) Quit (Ping timeout: 480 seconds)
[2:32] * LeaChim (~LeaChim@host86-161-90-122.range86-161.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:35] * Zethrok (~martin@95.154.26.254) has joined #ceph
[2:39] * Pedras (~Adium@216.207.42.132) Quit (Ping timeout: 480 seconds)
[2:39] * sarob (~sarob@2001:4998:effd:600:a9eb:df20:71a0:7def) Quit (Remote host closed the connection)
[2:39] * sarob (~sarob@2001:4998:effd:600:a9eb:df20:71a0:7def) has joined #ceph
[2:47] * sarob (~sarob@2001:4998:effd:600:a9eb:df20:71a0:7def) Quit (Ping timeout: 480 seconds)
[2:48] * sz0_ (~sz0@46.197.48.116) has joined #ceph
[2:52] * vbellur (~vijay@122.178.240.55) Quit (Read error: Operation timed out)
[2:56] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[2:58] * yguang11 (~yguang11@2406:2000:ef96:e:111c:9af4:7792:f0a) has joined #ceph
[3:05] * vbellur (~vijay@122.172.246.244) has joined #ceph
[3:09] * zack_dolby (~textual@e0109-49-132-43-197.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:09] * narb (~Jeff@38.99.52.10) Quit (Quit: narb)
[3:10] * KaZeR (~kazer@64.201.252.132) Quit (Ping timeout: 480 seconds)
[3:12] * yanzheng (~zhyan@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[3:12] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[3:15] * shang (~ShangWu@175.41.48.77) has joined #ceph
[3:21] * diegows (~diegows@190.190.5.238) Quit (Read error: Operation timed out)
[3:24] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[3:31] * rweeks (~rweeks@c-24-6-118-113.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[3:32] * zhaochao (~zhaochao@106.38.204.75) has joined #ceph
[3:32] * zack_dolby (~textual@e0109-49-132-43-197.uqwimax.jp) has joined #ceph
[3:35] * habalux (teemu@host-109-204-170-212.tp-fne.tampereenpuhelin.net) Quit (Remote host closed the connection)
[3:40] * habalux (teemu@host-109-204-170-212.tp-fne.tampereenpuhelin.net) has joined #ceph
[3:42] * shang (~ShangWu@175.41.48.77) Quit (Read error: Connection reset by peer)
[3:47] * zack_dolby (~textual@e0109-49-132-43-197.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:53] * zack_dolby (~textual@e0109-49-132-43-197.uqwimax.jp) has joined #ceph
[3:54] * zhaochao (~zhaochao@106.38.204.75) Quit (Remote host closed the connection)
[3:56] * zack_dolby (~textual@e0109-49-132-43-197.uqwimax.jp) Quit ()
[4:00] * Cube (~Cube@66-87-64-58.pools.spcsdns.net) Quit (Quit: Leaving.)
[4:03] * shang (~ShangWu@114-32-21-24.HINET-IP.hinet.net) has joined #ceph
[4:04] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[4:14] * zhaochao (~zhaochao@123.151.134.228) has joined #ceph
[4:16] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[4:19] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[4:22] <MACscr> im getting a python traceback after i upgraded to ceph-deploy 1.5.6
[4:22] <MACscr> http://pastie.org/pastes/9342168/text?key=phdwrpemkqvva5exkm3a
[4:30] * bkopilov (~bkopilov@213.57.17.152) Quit (Ping timeout: 480 seconds)
[4:49] * jtaguinerd (~jtaguiner@103.14.60.184) has joined #ceph
[4:52] <bens> how did you install it?
[4:53] <bens> don't matter - there is a bug.
[4:56] <bens> looking at it more closely
[4:57] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) Quit (Quit: Leaving)
[5:04] * Vacum_ (~vovo@88.130.199.119) has joined #ceph
[5:05] * rongze (~rongze@114.54.30.94) has joined #ceph
[5:11] * Vacum (~vovo@88.130.204.185) Quit (Ping timeout: 480 seconds)
[5:12] * chrisjones (~chrisjone@12.237.137.162) Quit (Quit: chrisjones)
[5:16] <bens> MACscr: if you want to fix it, you can fix line 377 in osd.py
[5:16] <bens> add "remoto" to it.
[5:16] <bens> I told the devs.
[5:16] <bens> /usr/lib/python2.7/dist-packages/ceph_deploy/osd.py
[5:17] <dmick> bens: did you file a bug?
[5:18] <bens> dmick: working on it now
[5:18] <dmick> k. was gonna offer but tnx
[5:18] <bens> you should do it though
[5:18] <bens> i don't know the lingo
[5:18] <bens> https://github.com/ceph/ceph-deploy/commit/2d44b8b6b55c15bc38f9d70495aecd14d249d38f#commitcomment-6849784
[5:18] <bens> that's the exact problem
[5:19] <dmick> where did you tell the devs? I'll make sure that gets a comment that I filed it
[5:19] <bens> #ceph-devel
[5:20] <bens> i want to file it but i don't know what kind of priority this is (i'd assume high)
[5:20] <bens> or immediate
[5:21] <bens> severity, etc.
[5:24] <dmick> you can always be overruled
[5:24] <dmick> but things to consider: is it core functionality, is there a workaround, etc.
[5:24] <dmick> in this case it's sorta secondary, but fatal (not everyone has to zap, but if you do you're screwed)
[5:25] <dmick> so I'd call it middle priority/high severity
[5:26] * zack_dolby (~textual@e0109-49-132-43-197.uqwimax.jp) has joined #ceph
[5:27] * leochill (~leochill@nyc-333.nycbit.com) Quit (Quit: Leaving)
[5:34] * wenjunh (~wenjunh@corp-nat.peking.corp.yahoo.com) has joined #ceph
[5:36] * beardo_ (~sma310@216-164-125-67.c3-0.drf-ubr1.atw-drf.pa.cable.rcn.com) Quit (Read error: Operation timed out)
[5:38] * zack_dolby (~textual@e0109-49-132-43-197.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[5:40] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[5:40] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[5:45] <wenjunh> Hello, when I add a swift key to a subuser it always outputs an error
[5:45] <wenjunh> -bash-4.1$ sudo radosgw-admin key create --subuser=john:sub1 --key-type=swift
[5:45] <wenjunh> could not create key: unable to add access key, unable to store user info
[5:45] <wenjunh> 2014-07-01 03:03:28.022071 7fbe1f81f820 0 WARNING: can't store user info, swift id () already mapped to another user (john)
[5:46] <wenjunh> The user info is:
[5:47] <wenjunh> { "user_id": "john",
[5:47] <wenjunh> "display_name": "john",
[5:47] <wenjunh> "email": "",
[5:47] <wenjunh> "suspended": 0,
[5:47] <wenjunh> "max_buckets": 1000,
[5:47] <wenjunh> "auid": 0,
[5:47] <wenjunh> "subusers": [
[5:47] <wenjunh> { "id": "john:sub1",
[5:47] <wenjunh> "permissions": "full-control"}],
[5:47] <wenjunh> "keys": [
[5:47] <wenjunh> { "user": "john",
[5:47] <wenjunh> "access_key": "4UGYI55F889HUC807JXR",
[5:47] <wenjunh> "secret_key": "Qh+dI5lN7hsSzDo4EhixiLJ66muRDhVh+hvAAYhR"},
[5:47] <wenjunh> { "user": "john:sub1",
[5:47] <wenjunh> "access_key": "7D85MXSMR3WSKI57VHT4",
[5:47] <wenjunh> "secret_key": ""}],
[5:47] <wenjunh> "swift_keys": [],
[5:47] <wenjunh> "caps": [],
[5:47] <wenjunh> "op_mask": "read, write, delete",
[5:47] <wenjunh> "default_placement": "",
[5:47] <wenjunh> "placement_tags": [],
[5:47] <wenjunh> "bucket_quota": { "enabled": false,
[5:47] <wenjunh> "max_size_kb": -1,
[5:47] <wenjunh> "max_objects": -1},
[5:47] <wenjunh> "user_quota": { "enabled": false,
[5:47] <wenjunh> "max_size_kb": -1,
[5:47] <wenjunh> "max_objects": -1},
[5:47] <wenjunh> "temp_url_keys": []}
[5:48] <wenjunh> Has anyone encountered this problem before?
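
For comparison, the usual sequence for a Swift subuser looks roughly like this (user and subuser names are examples); a successful key create should populate the "swift_keys" array shown above:

    radosgw-admin subuser create --uid=john --subuser=john:sub1 --access=full
    radosgw-admin key create --subuser=john:sub1 --key-type=swift --gen-secret
    radosgw-admin user info --uid=john
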
[5:51] * tserong (~tserong@203-57-208-132.dyn.iinet.net.au) has joined #ceph
[5:55] * shang (~ShangWu@114-32-21-24.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[5:56] * orua (~orua@mike-alien.esc.auckland.ac.nz) has joined #ceph
[5:56] * orua (~orua@mike-alien.esc.auckland.ac.nz) Quit ()
[6:00] * jamespage (~jamespage@culvain.gromper.net) Quit (Quit: Coyote finally caught me)
[6:00] * jtaguinerd1 (~jtaguiner@103.14.60.184) has joined #ceph
[6:00] * jamespage (~jamespage@culvain.gromper.net) has joined #ceph
[6:01] * shang (~ShangWu@175.41.48.77) has joined #ceph
[6:02] * yguang11 (~yguang11@2406:2000:ef96:e:111c:9af4:7792:f0a) Quit (Read error: Connection reset by peer)
[6:03] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[6:05] * jtaguinerd (~jtaguiner@103.14.60.184) Quit (Ping timeout: 480 seconds)
[6:07] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[6:09] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[6:10] * oblu (~o@62.109.134.112) has joined #ceph
[6:13] * shang (~ShangWu@175.41.48.77) Quit (Remote host closed the connection)
[6:15] * joef1 (~Adium@c-67-188-220-98.hsd1.ca.comcast.net) has joined #ceph
[6:24] <gleam> wowspam
[6:24] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:25] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[6:31] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[6:40] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[6:41] * rongze (~rongze@114.54.30.94) Quit (Ping timeout: 480 seconds)
[6:50] * Pedras (~Adium@50.185.218.255) has joined #ceph
[6:53] <MACscr> when i run ceph -s from a system, does it need access to the cluster or public network?
[6:53] * Pedras (~Adium@50.185.218.255) Quit ()
[6:54] <MACscr> also, what all do i need to do in order to restart everything from scratch short of wiping the boxes completely? Doesnt seem like ceph-deploy purge and purgedata is enough. still see osd's showing up that shouldnt exist anymore
[6:54] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:55] <MACscr> well, shouldnt exist if those tools really start me over from scratch
[6:55] <MACscr> ive even zapped every disk over again
[6:56] * rongze (~rongze@114.54.30.94) has joined #ceph
[6:56] <dmick> public: virtually everything except OSD-OSD traffic
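
A minimal ceph.conf fragment for the two networks (subnets are placeholders): clients, the ceph CLI and the mons use the public network, while OSD-to-OSD replication and heartbeats use the cluster network.

    [global]
    public network  = 192.0.2.0/24
    cluster network = 198.51.100.0/24
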
[6:56] <dmick> "restart everything from scratch": mons too?
[6:56] <MACscr> yep, mons are on the same system as the osd's
[6:56] <MACscr> ive tried as far as even just doing a default install with no special network settings and im still getting fault errors
[6:57] <dmick> if you've really stopped and removed all mons and osds then there's nothing left to know about anything, so there must be some problem with that assertion
[6:57] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[6:57] <MACscr> well how do i remove them other than simply doing those two commands i just mentioned
[6:58] <dmick> I don't know for sure without reproducing, but I know how to use ps and ls and stuff to find out if those commands worked
[7:00] <dmick> but if there are no mons and no osds and no packages then there's also no ceph command to look for osds or a mon to answer the ceph command
[7:02] * rdas (~rdas@121.244.87.115) has joined #ceph
[7:05] * Pedras (~Adium@50.185.218.255) has joined #ceph
[7:05] * joef1 (~Adium@c-67-188-220-98.hsd1.ca.comcast.net) has left #ceph
[7:09] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:10] * ScOut3R (~ScOut3R@4E5CC1B5.dsl.pool.telekom.hu) has joined #ceph
[7:14] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[7:25] * ScOut3R (~ScOut3R@4E5CC1B5.dsl.pool.telekom.hu) Quit (Ping timeout: 480 seconds)
[7:26] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:27] * Pedras (~Adium@50.185.218.255) Quit (Quit: Leaving.)
[7:29] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[7:36] <MACscr> dmick: which systems are the crushmaps stored on? just wanting to make sure they are completely gone so im starting over from scratch. I have gone ahead and stopped the ceph service, made sure no ceph processes were still running on any of the systems, ran ceph-deploy purge and ceph-deploy purgedata
[7:37] <dmick> various maps are stored on all of them; see that /var/lib/ceph is gone
[7:38] <MACscr> yep, its empty on all systems involved
[7:38] <dmick> can't be any record of the cluster then
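
A rough checklist for a genuinely clean restart (host and device names are placeholders; forgetkeys drops the keyrings cached in the admin working directory):

    ceph-deploy purge node1 node2 node3
    ceph-deploy purgedata node1 node2 node3
    ceph-deploy forgetkeys
    # on each node, verify nothing survived
    ps aux | grep ceph-
    ls /var/lib/ceph
    # wipe old partition tables before reusing the disks
    ceph-disk zap /dev/sdb
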
[7:39] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[7:39] <MACscr> what about the disks themselves that were created?
[7:40] <dmick> there might be data left there but it won't be a member of any new cluster you define
[7:44] <MACscr> dmick: i have no data that i have created on them, i just want to make sure there isnt anything that could cause any problems
[7:51] * joef1 (~Adium@c-67-188-220-98.hsd1.ca.comcast.net) has joined #ceph
[7:51] * joef1 (~Adium@c-67-188-220-98.hsd1.ca.comcast.net) Quit ()
[7:56] <MACscr> dmick: think i should be safe to try this again? any last advice?
[7:57] <dmick> nope
[7:57] <kraken> http://i.imgur.com/iSm1aZu.gif
[7:59] <wenjunh> when I add a subuser to Ceph it always fails to write the related data to .users.swift, does anyone know why?
[8:03] <wenjunh> when I look up the user info with "sudo radosgw-admin user info --uid=testuser" it shows the subuser item, but I cannot add a swift key, nor can I find any entry with "sudo rados -p .users.swift ls"
[8:04] * shang (~ShangWu@175.41.48.77) has joined #ceph
[8:12] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[8:13] * zhaochao (~zhaochao@123.151.134.228) has left #ceph
[8:19] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[8:21] * zhaochao (~zhaochao@123.151.134.228) has joined #ceph
[8:25] * shang (~ShangWu@175.41.48.77) has joined #ceph
[8:28] <MACscr> dmick: ok, so i have the monitors created and pushed the default conf to the nodes http://pastie.org/pastes/9342479/text?key=5c8qid8k8kc6vkkp8ay8gq
[8:28] <MACscr> as you can see, it shows the old cluster disks. Should i just zap and create them for the new cluster?
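
If the old partitions are to be reused, the usual ceph-deploy path is to zap and recreate them (host:disk pairs are examples):

    ceph-deploy disk zap node1:sdb node2:sdb node3:sdb
    ceph-deploy osd create node1:sdb node2:sdb node3:sdb
    ceph osd tree   # the new OSDs should appear and come up
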
[8:29] <MACscr> sorry for the 20 questions, just trying to make sure i dont make any mistakes so that i have a solid working default solution as I have had to reinstall and try this out way too many times already
[8:33] * aldavud (~aldavud@213.55.176.203) has joined #ceph
[8:36] * imriz (~imriz@82.81.163.130) has joined #ceph
[8:37] * shang (~ShangWu@175.41.48.77) Quit (Remote host closed the connection)
[8:41] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[8:42] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[8:44] * Sysadmin88 (~IceChat77@94.4.20.0) Quit (Quit: Easy as 3.14159265358979323846... )
[8:47] * madkiss (~madkiss@p549FC036.dip0.t-ipconnect.de) has joined #ceph
[8:48] * jtaguinerd1 (~jtaguiner@103.14.60.184) Quit (Read error: Connection reset by peer)
[8:48] * jtaguinerd (~jtaguiner@103.14.60.184) has joined #ceph
[8:51] * DavidThunder (~Thunderbi@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[8:53] * zidarsk8 (~zidar@2001:1470:fffd:101c:ea11:32ff:fe9a:870) has joined #ceph
[8:54] * zidarsk8 (~zidar@2001:1470:fffd:101c:ea11:32ff:fe9a:870) has left #ceph
[8:57] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[8:58] * jtaguinerd (~jtaguiner@103.14.60.184) Quit (Read error: Connection reset by peer)
[8:59] * jtaguinerd (~jtaguiner@103.14.60.184) has joined #ceph
[9:00] * madkiss (~madkiss@p549FC036.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[9:00] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has joined #ceph
[9:01] * Hell_Fire__ (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (Remote host closed the connection)
[9:01] * Hell_Fire__ (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[9:02] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[9:05] * DavidThunder (~Thunderbi@ip68-4-173-198.oc.oc.cox.net) Quit (Quit: DavidThunder)
[9:13] <MACscr> hmm, i think ive got a basic install working. woo hoo =P
[9:13] * scuttlemonkey is now known as scuttle|afk
[9:25] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[9:26] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[9:27] * rendar (~I@87.19.176.217) has joined #ceph
[9:33] * michalefty (~micha@p20030071CF4F7F002CD61BE329140855.dip0.t-ipconnect.de) has joined #ceph
[9:33] * aldavud (~aldavud@213.55.176.203) Quit (Ping timeout: 480 seconds)
[9:33] * michalefty (~micha@p20030071CF4F7F002CD61BE329140855.dip0.t-ipconnect.de) has left #ceph
[9:34] * rdas (~rdas@121.244.87.115) Quit (Ping timeout: 480 seconds)
[9:36] * analbeard (~shw@support.memset.com) has joined #ceph
[9:39] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[9:39] * lucas1 (~Thunderbi@222.247.57.50) Quit (Quit: lucas1)
[9:41] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[9:41] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) Quit (Read error: Connection reset by peer)
[9:44] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) has joined #ceph
[9:48] * zhaochao (~zhaochao@123.151.134.228) Quit (Ping timeout: 480 seconds)
[9:50] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[9:51] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[9:51] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[9:52] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[9:52] * ChanServ sets mode +v andreask
[9:53] * dignus (~jkooijman@53520F05.cm-6-3a.dynamic.ziggo.nl) Quit (Quit: leaving)
[9:54] * ade (~abradshaw@193.202.255.218) has joined #ceph
[9:55] <pressureman> how far off is firefly 0.80.2? the "set_extsize: FSSETXATTR: (22) Invalid argument" errors in my OSD logs are really starting to choke up the logs
[9:57] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:58] <Gugge-47527> pressureman: it was announced 3 days ago
[9:58] * fsimonce (~simon@host27-60-dynamic.26-79-r.retail.telecomitalia.it) has joined #ceph
[9:58] <pressureman> Gugge-47527, nope, not 0.82.... 0.80.2. i'm sticking to stable releases
[9:59] * cronix1 (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[9:59] <pressureman> 0.82 is a development release
[9:59] <Gugge-47527> ahh yes
[10:00] <Gugge-47527> 0.80.2 is too much like 0.82 apparently :P
[10:00] * fdmanana (~fdmanana@bl5-3-231.dsl.telepac.pt) has joined #ceph
[10:01] * jordanP (~jordan@185.23.92.11) has joined #ceph
[10:02] * madkiss (~madkiss@p549FC036.dip0.t-ipconnect.de) has joined #ceph
[10:15] * dignus (~jkooijman@t-x.dignus.nl) has joined #ceph
[10:15] * lucas1 (~Thunderbi@222.240.148.154) has joined #ceph
[10:20] * thb (~me@0001bd58.user.oftc.net) has joined #ceph
[10:30] * shang (~ShangWu@175.41.48.77) has joined #ceph
[10:30] * LeaChim (~LeaChim@host86-161-90-122.range86-161.btcentralplus.com) has joined #ceph
[10:37] * shang_ (~ShangWu@175.41.48.77) has joined #ceph
[10:38] * shang_ (~ShangWu@175.41.48.77) Quit (Remote host closed the connection)
[10:38] * shang_ (~ShangWu@175.41.48.77) has joined #ceph
[10:40] * shang (~ShangWu@175.41.48.77) Quit (Read error: Operation timed out)
[10:41] * zidarsk8 (~zidar@2001:1470:fffd:101c:ea11:32ff:fe9a:870) has joined #ceph
[10:44] * zidarsk8 (~zidar@2001:1470:fffd:101c:ea11:32ff:fe9a:870) Quit ()
[10:44] * zidarsk8 (~zidar@2001:1470:fffd:101c:ea11:32ff:fe9a:870) has joined #ceph
[10:50] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) Quit (Quit: Leaving.)
[10:51] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[10:51] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:52] * andreask (~andreask@zid-vpnn101.uibk.ac.at) has joined #ceph
[10:52] * ChanServ sets mode +v andreask
[10:54] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[10:55] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:59] * zhaochao (~zhaochao@106.38.204.72) has joined #ceph
[11:02] * yanzheng (~zhyan@jfdmzpr02-ext.jf.intel.com) Quit (Remote host closed the connection)
[11:04] * wenjunh_ (~wenjunh@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[11:08] * steveeJ (~junky@client156.amh.kn.studentenwohnheim-bw.de) has joined #ceph
[11:09] * wenjunh (~wenjunh@corp-nat.peking.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[11:09] * wenjunh_ is now known as wenjunh
[11:11] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) Quit (Read error: Operation timed out)
[11:13] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[11:14] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[11:17] * andreask (~andreask@zid-vpnn101.uibk.ac.at) Quit (Ping timeout: 480 seconds)
[11:18] * circ-user-e87qS (~circuser-@2001:1458:202:180::101:f6c7) has joined #ceph
[11:18] * circ-user-e87qS (~circuser-@2001:1458:202:180::101:f6c7) has left #ceph
[11:19] * [fred] (fred@earthli.ng) Quit (Ping timeout: 480 seconds)
[11:22] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[11:24] * dvanders (~dvanders@pb-d-128-141-237-218.cern.ch) Quit (Quit: dvanders)
[11:24] * dvanders (~circuser-@2001:1458:202:180::101:f6c7) has joined #ceph
[11:26] * zhaochao (~zhaochao@106.38.204.72) Quit (Ping timeout: 480 seconds)
[11:31] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[11:31] * ChanServ sets mode +v andreask
[11:31] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[11:33] * brambles (~xymox@s0.barwen.ch) Quit (Remote host closed the connection)
[11:34] * brambles (~xymox@s0.barwen.ch) has joined #ceph
[11:34] * zidarsk8 (~zidar@2001:1470:fffd:101c:ea11:32ff:fe9a:870) Quit (Quit: Leaving.)
[11:34] * zhaochao (~zhaochao@106.38.204.67) has joined #ceph
[11:39] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:43] * dvanders_ (~dvanders@2001:1458:202:180::102:f6c7) has joined #ceph
[11:44] * zhaochao (~zhaochao@106.38.204.67) Quit (Ping timeout: 480 seconds)
[11:47] * lucas1 (~Thunderbi@222.240.148.154) Quit (Quit: lucas1)
[11:47] * dvanders (~circuser-@2001:1458:202:180::101:f6c7) Quit (Ping timeout: 480 seconds)
[11:48] * raso (~raso@deb-multimedia.org) Quit (Quit: WeeChat 0.4.3)
[11:49] * raso (~raso@deb-multimedia.org) has joined #ceph
[12:07] * dvanders_ (~dvanders@2001:1458:202:180::102:f6c7) has left #ceph
[12:07] * dvanders (~dvanders@2001:1458:202:180::102:f6c7) has joined #ceph
[12:09] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[12:09] * bitserker (~toni@213.229.187.104) has joined #ceph
[12:10] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[12:14] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:15] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[12:16] * cronix1 (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[12:19] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) Quit (Quit: Textual IRC Client: www.textualapp.com)
[12:24] * nljmo (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) has joined #ceph
[12:42] * rongze (~rongze@114.54.30.94) Quit (Remote host closed the connection)
[12:47] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[12:47] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[12:47] * leseb (~leseb@185.21.174.206) has joined #ceph
[12:55] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[12:55] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Quit: Leaving...)
[12:57] * chrisjones (~chrisjone@vpngaf.ccur.com) has joined #ceph
[13:01] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[13:01] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit ()
[13:03] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[13:04] * dmit2k (~Adium@balticom-131-176.balticom.lv) has joined #ceph
[13:07] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[13:09] * beardo_ (~sma310@beardo.cc.lehigh.edu) has joined #ceph
[13:10] * pploegaert (~philippe@85.255.197.126) has joined #ceph
[13:14] * pploegaert (~philippe@85.255.197.126) has left #ceph
[13:17] * beardo (~sma310@beardo.cc.lehigh.edu) Quit (Ping timeout: 480 seconds)
[13:17] * shang_ (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[13:18] * huangjun (~kvirc@111.174.239.37) Quit (Ping timeout: 480 seconds)
[13:23] * wenjunh (~wenjunh@vpn-nat.peking.corp.yahoo.com) Quit (Quit: wenjunh)
[13:26] * zidarsk8 (~zidar@2001:1470:fffd:c000:ea11:32ff:fe9a:870) has joined #ceph
[13:26] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[13:27] * zidarsk8 (~zidar@2001:1470:fffd:c000:ea11:32ff:fe9a:870) has left #ceph
[13:27] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:28] * fdmanana (~fdmanana@bl5-3-231.dsl.telepac.pt) Quit (Quit: Leaving)
[13:32] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[13:32] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[13:32] * ChanServ sets mode +v andreask
[13:32] * BManojlovic (~steki@91.195.39.5) Quit ()
[13:33] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[13:33] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[13:37] * rwheeler (~rwheeler@smb-rsycl-04.wifihubtelecom.net) has joined #ceph
[13:37] * jordanP (~jordan@185.23.92.11) Quit (Ping timeout: 480 seconds)
[13:39] * Kedsta (~ked@cpc6-pool14-2-0-cust202.15-1.cable.virginm.net) has joined #ceph
[13:39] * bitserker (~toni@213.229.187.104) Quit (Ping timeout: 480 seconds)
[13:39] * keds (Ked@cpc6-pool14-2-0-cust202.15-1.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[13:41] * rwheeler (~rwheeler@smb-rsycl-04.wifihubtelecom.net) Quit ()
[13:41] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[13:41] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[13:41] * jordanP (~jordan@185.23.92.11) has joined #ceph
[13:44] * Guest40 (~coyo@thinks.outside.theb0x.org) Quit (Remote host closed the connection)
[13:45] * Coyo (~coyo@209.148.95.237) has joined #ceph
[13:45] * Coyo is now known as Guest243
[13:45] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[13:51] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[14:01] * fdmanana (~fdmanana@bl5-3-231.dsl.telepac.pt) has joined #ceph
[14:01] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[14:05] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[14:07] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[14:09] * lalatenduM_ (~lalatendu@nat-pool-blr-t.redhat.com) has joined #ceph
[14:10] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:10] * lala__ (~lalatendu@121.244.87.117) has joined #ceph
[14:14] * lalatenduM (~lalatendu@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:14] * lalatenduM_ (~lalatendu@nat-pool-blr-t.redhat.com) Quit (Read error: Operation timed out)
[14:16] * markbby (~Adium@168.94.245.3) has joined #ceph
[14:18] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[14:22] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[14:28] * cronix1 (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[14:31] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[14:38] * jtaguinerd (~jtaguiner@103.14.60.184) Quit (Quit: Leaving.)
[14:44] * rwheeler (~rwheeler@smb-rsycl-04.wifihubtelecom.net) has joined #ceph
[14:44] * BManojlovic (~steki@91.195.39.5) Quit (Remote host closed the connection)
[14:48] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[14:49] <ssejourne> hi. I created a fresh cluster on ubuntu 14.04. the cluster is unhealthy until I set the tunable profile to legacy...
[14:50] <ssejourne> I have 4 osd on the same host and I set "osd_crush_chooseleaf_type = 0" in ceph.conf
[14:52] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[14:52] <ssejourne> when the cluster is not healthy, all pgs are only on osd.0
[14:53] * markbby (~Adium@168.94.245.3) has joined #ceph
[14:54] <ssejourne> ceph version 0.80.1
[14:54] <ssejourne> any idea why I have to use legacy tunable?
[14:59] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Remote host closed the connection)
[15:00] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[15:01] * jrankin (~jrankin@d47-69-66-231.try.wideopenwest.com) has joined #ceph
[15:03] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[15:04] * thomnico (~thomnico@wmh38-5-78-223-116-113.fbx.proxad.net) has joined #ceph
[15:07] * cronix1 (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[15:09] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[15:14] * michalefty (~micha@p20030071CF5E28002CD61BE329140855.dip0.t-ipconnect.de) has joined #ceph
[15:20] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[15:25] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[15:25] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[15:27] <magicrobotmonkey> is anyone using the dmcrypt option with ceph?
[15:27] <magicrobotmonkey> can you pass it to ceph-disk directly or only to ceph-deploy?
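
ceph-disk does expose dmcrypt directly, not only via ceph-deploy; a sketch with firefly-era flag names (device, host and key directory are examples, worth confirming against ceph-disk --help):

    # directly with ceph-disk (what ceph-deploy drives under the hood)
    ceph-disk prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys /dev/sdb
    # or through ceph-deploy
    ceph-deploy osd create --dmcrypt node1:sdb
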
[15:30] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[15:32] * vbellur (~vijay@122.172.246.244) Quit (Ping timeout: 480 seconds)
[15:35] * michalefty (~micha@p20030071CF5E28002CD61BE329140855.dip0.t-ipconnect.de) has left #ceph
[15:38] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[15:43] * vbellur (~vijay@122.167.203.72) has joined #ceph
[15:45] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) has joined #ceph
[15:47] * gregsfortytwo1 (~Adium@cpe-107-184-64-126.socal.res.rr.com) Quit ()
[15:50] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[15:54] * rturk|afk is now known as rturk
[15:54] * osier (~osier@125.33.124.148) has joined #ceph
[15:56] <joelio> ssejourne: what kernel version?
[15:56] <joelio> you shouldn't need to set legacy on 14.04 fwiw
[15:57] <joelio> set optimal
[15:57] <joelio> 4 osd on the same host is more problematic than that
[15:58] <joelio> if that's the only OSD host of course
[16:02] * scuttle|afk is now known as scuttlemonkey
[16:04] <pressureman> does anyone know if megaraid adapters support writeback cache in JBOD mode? i've heard various rumours that they don't
[16:04] * osier (~osier@125.33.124.148) Quit (Quit: Leaving)
[16:05] * Pedras (~Adium@50.185.218.255) has joined #ceph
[16:06] * Seon (~Seon@125.33.124.148) has joined #ceph
[16:06] * Pedras (~Adium@50.185.218.255) Quit ()
[16:06] * Pedras (~Adium@216.207.42.140) has joined #ceph
[16:08] <Seon> hi, unfortunately our devops changed monitor's IP address in /etc/ceph.conf with scripts after refactoring the network, and now it seems ceph doesn't work at all, I have no way to get the monitor map, the command just returns something like:
[16:08] <Seon> pipe(0x7fde180008c0 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7fde18000e20).fault
[16:08] <darkfader> let him fix it so he learns a bit about how to not nuke out prod systems :>
[16:08] <Seon> anyone could give some guide? how can I get it fixed? or it's completely unrecoverable? thanks a lot!
[16:09] <gleam> pressureman: I believe not, I think you'd have to set them up as 1-drive raid 0s
[16:09] <cookednoodles> Seon, use hosts :/
[16:10] <Seon> guys, it's not fun...
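
The documented recovery for changed monitor addresses ("the messy way") is roughly the following, run with the monitors stopped; mon id and address are placeholders, and this is a sketch rather than a guaranteed fix:

    ceph-mon -i mon-a --extract-monmap /tmp/monmap
    monmaptool --rm mon-a /tmp/monmap
    monmaptool --add mon-a 192.0.2.10:6789 /tmp/monmap
    ceph-mon -i mon-a --inject-monmap /tmp/monmap
    # repeat for each monitor with its new address, then start the mons
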
[16:14] <pressureman> gleam, so i'm guessing that LSI's less intelligent HBAs also probably don't support WB cache?
[16:15] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[16:15] * ChanServ sets mode +v andreask
[16:16] <gleam> i'd imagine stuff like the 9207/9211 doesn't have any onboard cache, and it's up to the drives to have wb on/off
[16:19] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[16:19] * tracphil (~tracphil@130.14.71.217) has joined #ceph
[16:20] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Remote host closed the connection)
[16:20] <pressureman> gleam, hrmm, i suppose with modern drives having ~64 MB onboard cache, that may in fact be better than 512 MB shared cache on a controller
[16:21] <gleam> yeah, but the idea with controller cache is you have a bbu so you can have wb on the controller on safely
[16:21] <gleam> with drives you don't have that luxury
[16:21] <pressureman> although if the data is replicated on other OSDs, does it really matter?
[16:22] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[16:22] * ChanServ sets mode +v andreask
[16:22] <gleam> depends on your failure domain i guess? if you know whatever group (rack, row, datacenter, whatever) all three OSDs are in won't fail then you're fine
[16:22] <gleam> but if they're all in one rack and you lose power to the rack..
[16:22] <pressureman> yep, point taken
[16:23] * gregmark (~Adium@cet-nat-254.ndceast.pa.bo.comcast.net) has joined #ceph
[16:23] <absynth> by the way, don't take "datacenter" too seriously as an OSD grouping metric, it's rather "room"
[16:24] <gleam> yeah, that's sort of what i mean
[16:24] <pressureman> i use the crush "room" bucket type to represent my firezones ;-)
[16:24] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[16:25] * daniel (~daniel@soho73-234.sohonet.co.uk) has joined #ceph
[16:25] * daniel is now known as Guest266
[16:25] <pressureman> anyone using LSI cachecade on ceph? it's a licensed feature that allows you to use dedicated SSDs as WB cache
[16:25] <absynth> yeah, we use it
[16:25] * jmlowe (~Adium@2601:d:a800:511:f06a:a63b:e831:27fb) has joined #ceph
[16:26] <pressureman> what's your experience with it?
[16:26] * ikrstic (~ikrstic@c82-214-88-26.loc.akton.net) Quit (Quit: Konversation terminated!)
[16:26] <absynth> you better have BBU.
[16:26] <pressureman> of course
[16:26] <jmlowe> Quick question about the debian dumpling repos, are they supposed to be up to 0.67.9 for all releases?
[16:26] <absynth> we think it gives a nice speedup
[16:27] <pressureman> does it still make sense to put journals on separate drive (i.e. SSD) if using cachecade?
[16:27] <absynth> i seem to remember nhm ran benchmarks on it
[16:27] <absynth> yeah, we think it does make sense. don't ask for the rationale - i don't have it handy right now
[16:27] * [fred] (fred@earthli.ng) has joined #ceph
[16:28] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Read error: Operation timed out)
[16:28] <pressureman> absynth, i'm going to have a bunch of this megaraid hardware to do some testing / benchmarking on shortly
[16:29] <pressureman> i'm seesawing between "throw money at it, make it go fast" and "prove that ceph can squeeze performance out of commodity hardware, if scaled horizontally"
[16:30] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[16:32] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[16:32] <jmlowe> It looks like 0.67.9 packages were built for ubuntu 12.04-14.04 but the Packages files were only updated for precise,quantal,raring while saucy and trusty were neglected and are stuck at 0.67.7
[16:32] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[16:33] <jmlowe> any inktank'ers around and care to comment?
[16:33] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[16:35] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[16:37] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[16:40] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Quit: Leaving.)
[16:40] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[16:40] * ChanServ sets mode +v andreask
[16:41] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[16:43] <absynth> pressureman: i don't think that the key to performance is horizontal scaling
[16:43] <absynth> it might be if you're willing to invest in a replica count of 4 or more
[16:43] <absynth> we have found that any host in your setup which is subpar to the other hosts will drag down the whole cluster
[16:44] <absynth> so, basically, if you have one machine without cachecade and a slightly slower controller, your whole cluster will suffer - mostly under rebalancing conditions, but maybe even at other times of high (production) load
[16:45] <pressureman> yes, i can believe that
[16:45] <pressureman> herd of buffalo only moves as fast as the slowest buffalo ;-)
[16:46] <absynth> moo.
[16:46] <pressureman> well my company seems to like buying these megaraid controllers and cachecade licenses
[16:46] * Seon (~Seon@125.33.124.148) Quit (Ping timeout: 480 seconds)
[16:46] <pressureman> so i suppose i should make the most of it
[16:46] <absynth> then go at it
[16:47] * sz0_ (~sz0@46.197.48.116) Quit ()
[16:50] <magicrobotmonkey> is it possible for a collocated journal to overflow and overwrite partitions
[16:51] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[16:52] <ssejourne> joelio: it's just a test platform (vagrant). that's why it's 4 osd on the same server. Kernel is version 3.13.0
[16:53] <ssejourne> only legacy tunables seems to make the cluster healthy. if I use default or optimal profiles, I have pg problems
[16:54] <Guest266> Hi - beginner to Ceph, and I've been going through the 3 node tutorial... Should I be using firefly? I've run into problems with both Ubuntu 14.04 and 12.04 and don't know if it is the build or if the docs are out of date...
[16:55] <Guest266> For example creating additional monitors gives errors
[16:57] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[17:01] <joelio> ssejourne: I'd hazard a guess to say it's due to only one osd host.. try and fire up another vagrant box with an osd to check ;)
[17:01] * joelio uses trusty with optimal
[17:01] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:01] * mrjack (mrjack@office.smart-weblications.net) Quit (Ping timeout: 480 seconds)
[17:01] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[17:02] * ade (~abradshaw@193.202.255.218) Quit (Ping timeout: 480 seconds)
[17:02] * JCL (~JCL@2601:9:5980:39b:bcf1:8f20:1986:d04d) Quit (Quit: Leaving.)
[17:03] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[17:03] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:05] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[17:06] <ssejourne> right. but I'm just curious as I have " osd_crush_chooseleaf_type = 0" in my ceph.conf to replicate by default between osd and not host
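
For a single-host test cluster the usual knobs go into ceph.conf before the cluster is created; a sketch (values are the common single-node ones), plus the commands to inspect what a running cluster actually ended up with:

    # ceph.conf, [global]
    osd crush chooseleaf type = 0   # replicate across OSDs rather than hosts
    osd pool default size = 2

    ceph osd crush rule dump
    ceph osd crush show-tunables
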
[17:06] * JCL (~JCL@c-24-23-166-139.hsd1.ca.comcast.net) has joined #ceph
[17:10] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[17:10] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[17:11] * rturk is now known as rturk|afk
[17:12] * rwheeler (~rwheeler@smb-rsycl-04.wifihubtelecom.net) Quit (Quit: Leaving)
[17:17] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) has joined #ceph
[17:18] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:19] * Seon (~Seon@221.219.110.241) has joined #ceph
[17:21] <baylight> I'm reading about Ceph replication and it makes me wonder - when a PG is degraded because the primary is down (but not out), is the cluster able to write to that PG in the 5 minute window before the OSD is marked out and crush picks a new master?
[17:23] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[17:24] <tnt_> Mmmm, I just killed an OSD (deleted its data dir), then recreated one with "ceph-osd -c /etc/ceph/ceph.conf -i 3 --mkfs", restored its keyring and then started it again. It starts fine but doesn't seem to be doing anything. It doesn't even appear as "up".
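
For reference, the full manual OSD creation sequence from the docs looks roughly like this (id, weight, host and paths are examples):

    ceph osd create                       # allocates an osd id
    ceph-osd -i 3 --mkfs --mkkey
    ceph auth add osd.3 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-3/keyring
    ceph osd crush add osd.3 1.0 host=node1
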
[17:25] * WF (~oftc-webi@ip70-185-97-72.ga.at.cox.net) has joined #ceph
[17:26] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Read error: Operation timed out)
[17:26] * WF (~oftc-webi@ip70-185-97-72.ga.at.cox.net) Quit ()
[17:27] * KB (~oftc-webi@cpe-74-137-224-213.swo.res.rr.com) has joined #ceph
[17:27] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[17:27] * MLM (~oftc-webi@fl-67-235-133-230.dhcp.embarqhsd.net) has joined #ceph
[17:27] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[17:28] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[17:28] * joelio (~Joel@88.198.107.214) Quit (Ping timeout: 480 seconds)
[17:29] * markbby (~Adium@168.94.245.3) has joined #ceph
[17:29] * MLM (~oftc-webi@fl-67-235-133-230.dhcp.embarqhsd.net) has left #ceph
[17:29] * shang_ (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[17:30] * narb (~Jeff@38.99.52.10) has joined #ceph
[17:33] <magicrobotmonkey> anyone know if i can get 0.82 packages anywhere yet?
[17:33] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:34] * jtaguinerd (~jtaguiner@125.212.121.220) has joined #ceph
[17:34] <jmlowe> baylight: yes, it writes to the secondary and the pg versions change so when the primary comes back online it updates from the secondary to the latest version and takes over
[17:35] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Ping timeout: 480 seconds)
[17:36] <tnt_> Anyone ? Any idea why my OSD is not registering as up ?
[17:36] * madkiss (~madkiss@p549FC036.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[17:37] <jtaguinerd> Hi Guys, our pg_num is 1200 while our pgp_num is only 64. We are thinking of increasing the pgp_num to the same number as the pg_num, but we have nearfull OSDs. Is this going to make things worse for us or not?
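
For reference, the commands involved (pool name is a placeholder); raising pgp_num makes the extra placement groups actually remap to OSDs, so it does trigger data movement, which is the concern with nearfull OSDs:

    ceph osd pool get mypool pg_num
    ceph osd pool get mypool pgp_num
    ceph osd pool set mypool pgp_num 1200
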
[17:38] * ircolle-away is now known as ircolle
[17:40] * rweeks (~rweeks@c-24-6-118-113.hsd1.ca.comcast.net) has joined #ceph
[17:40] * ade (~abradshaw@80-72-52-57.cmts.powersurf.li) has joined #ceph
[17:41] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[17:42] <jtaguinerd> @tnt, have you tried checking the logs of that particular OSD?
[17:42] <cephalobot> jtaguinerd: Error: "tnt," is not a valid command.
[17:45] <bens> @help
[17:45] <cephalobot> bens: (help [<plugin>] [<command>]) -- This command gives a useful description of what <command> does. <plugin> is only necessary if the command is in more than one plugin.
[17:45] <bens> this should be fun.
[17:45] <tnt_> jtaguinerd: yes, nothing abnormal. http://pastebin.com/TPzNth6P
[17:46] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[17:46] <tnt_> jtaguinerd: a netstat even shows an active tcp connection to one of the mon.
[17:46] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[17:46] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[17:47] * lala__ (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[17:48] * imriz (~imriz@82.81.163.130) Quit (Ping timeout: 480 seconds)
[17:49] <jtaguinerd> tnt: can you paste the result of ceph osd tree
[17:50] <tnt_> jtaguinerd: http://pastebin.com/ja48N0YN
[17:51] * funnel (~funnel@23.226.237.192) Quit (Read error: Connection reset by peer)
[17:52] <absynth> only now did i connect a name to tnt_'s nick
[17:52] <absynth> jeez, i'm slow
[17:52] * b0e (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[17:52] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[17:54] <jtaguinerd> tnt: what happens if you try to restart ceph-osd id=3
[17:54] * shang_ (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Quit: Ex-Chat)
[17:54] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) has joined #ceph
[17:54] <tnt_> jtaguinerd: the log I posted above. I tried restarting in a few times already. Nothing appears in the ceph -w or the ceph mon logs.
[17:55] <tnt_> jtaguinerd: The cluster has noout set currently. And ceph-osd-3 was flagged as 'full' prior to its demise, but that shouldn't matter.
[17:58] <tnt_> absynth: Do we know each other ?
[17:59] <absynth> nah, i read your crosspost to the mailing list
[17:59] <jtaguinerd> tnt try removing the noout
[18:01] <tnt_> Well, it backfills but that's really not what I wanted ...
[18:01] <tnt_> because it doesn't help osd.3 to come up and there won't be the space on osd.[012] to replicate ...
[18:04] * jtaguinerd1 (~jtaguiner@112.198.82.144) has joined #ceph
[18:05] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:05] * JuanEpstein (~rweeks@c-24-6-118-113.hsd1.ca.comcast.net) has joined #ceph
[18:05] * rweeks is now known as Guest285
[18:05] * JuanEpstein is now known as rweeks
[18:07] * madkiss (~madkiss@p5099fdaa.dip0.t-ipconnect.de) has joined #ceph
[18:08] * jtaguinerd (~jtaguiner@125.212.121.220) Quit (Read error: Operation timed out)
[18:08] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[18:11] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[18:12] * sarob_ (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[18:14] * jordanP (~jordan@185.23.92.11) Quit (Quit: Leaving)
[18:18] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[18:19] * sarob_ (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[18:20] * colonD (~colonD@173-165-224-105-minnesota.hfc.comcastbusiness.net) has joined #ceph
[18:21] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[18:21] * ade (~abradshaw@80-72-52-57.cmts.powersurf.li) Quit (Ping timeout: 480 seconds)
[18:21] * jrankin (~jrankin@d47-69-66-231.try.wideopenwest.com) Quit (Quit: Leaving)
[18:22] * rektide (~rektide@eldergods.com) Quit (Read error: Network is unreachable)
[18:24] * rturk|afk is now known as rturk
[18:24] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (Quit: Leaving.)
[18:28] * dmit2k (~Adium@balticom-131-176.balticom.lv) Quit (Quit: Leaving.)
[18:29] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[18:30] * rektide (~rektide@eldergods.com) has joined #ceph
[18:30] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[18:31] * davidzlap (~Adium@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[18:31] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[18:35] * DavidThunder (~Thunderbi@ip68-4-173-198.oc.oc.cox.net) has joined #ceph
[18:36] * dmit2k (~Adium@balticom-131-176.balticom.lv) has joined #ceph
[18:37] * ScOut3R_ (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Read error: Operation timed out)
[18:37] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[18:37] * thomnico (~thomnico@wmh38-5-78-223-116-113.fbx.proxad.net) Quit (Quit: Ex-Chat)
[18:38] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Read error: Connection reset by peer)
[18:39] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[18:41] * jtaguinerd1 (~jtaguiner@112.198.82.144) Quit (Ping timeout: 480 seconds)
[18:42] * madkiss (~madkiss@p5099fdaa.dip0.t-ipconnect.de) Quit (Quit: Leaving.)
[18:43] * lpabon (~lpabon@nat-pool-bos-t.redhat.com) Quit (Quit: ZNC - http://znc.in)
[18:44] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Read error: Connection reset by peer)
[18:44] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[18:46] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Operation timed out)
[18:46] * lpabon (~lpabon@nat-pool-bos-t.redhat.com) has joined #ceph
[18:47] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[18:48] * dignus (~jkooijman@t-x.dignus.nl) Quit (Read error: Operation timed out)
[18:50] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Read error: Connection reset by peer)
[18:55] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[18:56] * rektide (~rektide@eldergods.com) Quit (Read error: Connection reset by peer)
[18:58] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Read error: Connection reset by peer)
[18:58] * rektide (~rektide@eldergods.com) has joined #ceph
[18:59] * xarses (~andreww@12.164.168.117) has joined #ceph
[18:59] * gregsfortytwo (~Adium@2607:f298:a:607:40c8:3887:435f:7674) Quit (Quit: Leaving.)
[19:00] * gregsfortytwo (~Adium@38.122.20.226) has joined #ceph
[19:01] * sjusthm (~sam@66-214-251-229.dhcp.gldl.ca.charter.com) has joined #ceph
[19:07] * shang (~ShangWu@220-135-203-169.HINET-IP.hinet.net) Quit (Quit: Ex-Chat)
[19:07] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[19:07] * Cube (~Cube@12.248.40.138) has joined #ceph
[19:09] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[19:09] * dignus (~jkooijman@t-x.dignus.nl) has joined #ceph
[19:12] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) has joined #ceph
[19:14] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[19:19] * sarob (~sarob@c-76-102-72-171.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[19:19] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[19:21] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[19:24] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[19:25] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Read error: Connection reset by peer)
[19:27] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:30] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[19:30] * jcsp (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[19:33] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[19:34] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[19:35] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[19:35] * joef (~Adium@2620:79:0:131:e0fc:5391:9471:b51a) Quit (Remote host closed the connection)
[19:36] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[19:37] * rektide (~rektide@eldergods.com) Quit (Read error: Operation timed out)
[19:40] * kitz (~kitz@admin163-7.hampshire.edu) has joined #ceph
[19:40] * joef (~Adium@138-72-131-163.pixar.com) has joined #ceph
[19:41] <bens> yo.
[19:41] <bens> ceph got added to epel a few months back.
[19:41] * rektide (~rektide@eldergods.com) has joined #ceph
[19:42] <bens> epel has firefly in it.
[19:42] * steveeJ (~junky@client156.amh.kn.studentenwohnheim-bw.de) Quit (Quit: Leaving)
[19:42] <bens> it is newer than the version in my ceph repo, which I called out as dumpling.
[19:42] <bens> this broke yum update.
[19:43] <bens> Well, it didn't break it, it just caused the ceph repo to be ignored.
[19:43] <bens> what's the new best practice for installing an older version?
[19:43] <t0rn> http://ceph.com/rpm-dumpling/ for example
[19:45] * mwyatt (~mwyatt@137.152.127.16) has joined #ceph
[19:47] <kitz> If i'm trying to detect a single slow OSD (for writes) which admin socket perf counter should I be looking at? osd->op_[rw]_latency, filestore->journal->journal_latency, filestore->journal->commitcycle_latency, filestore->journal->apply_latency, filestore->journal->queue_transaction_latency_avg?
[19:47] <bens> t0rn: thats whats in my repo.
[19:48] <bens> the problem is that ceph has been added to the epel repo.
[19:48] <t0rn> oh I see. Then set up yum priorities.
[19:48] <bens> ok
[19:48] <bens> the docs need to be updated
[19:49] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[19:50] <bens> priorities are icky tho
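A minimal sketch of the yum-priorities approach t0rn suggests, assuming an EL6 host and the rpm-dumpling repo mentioned above; the repo file contents and URLs are illustrative, so adjust them to your actual repo.

    yum install yum-plugin-priorities

    # /etc/yum.repos.d/ceph.repo (illustrative contents)
    [ceph]
    name=Ceph dumpling packages
    baseurl=http://ceph.com/rpm-dumpling/el6/x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
    priority=1

With priority=1 the ceph.com repo wins over EPEL (which has no priority set by default), so yum keeps the pinned dumpling packages instead of pulling firefly from EPEL.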
[19:50] <mwyatt> Newbie here with a question for anyone: The docs give the (OSD*100)/repcount formula for calculating pg_num for a single pool. Should that be adjusted downward with multiple pools in the cluster, or should every pool follow that rule?
[19:50] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[19:51] <bens> that's the total number of PGs across all pools
[19:51] <jmlowe> mwyatt: I believe that should be adjusted down for multiple pools
[19:51] <mwyatt> got it.
[19:51] <Anticimex> hmmm
[19:51] <jmlowe> mwyatt: you can always make more, but I don't think pg merging has been implemented
[19:52] <Anticimex> my gut feeling is that pg_num at decent number is what enables balanced placement of objects
[19:52] <mwyatt> that makes sense. so if that's the case, are there some guidelines I should consider when selecting my pg_num counts? i.e. block device vs object vs whatever?
[19:53] <mwyatt> Anticimex: that's my understanding as well.
[19:53] <mwyatt> thanks all.
[19:54] <mwyatt> so my current test cluster is 4 hosts - 6 OSDs each. Formula yields 800 for pg_num so rounding up gives me 1024 pg_num to work with. I'm just toying with how to best distribute that among my pools now.
[19:54] <jmlowe> mwyatt: the more you have, the more likely two operations won't hit the same osd, but the more PGs, the more overhead there is for each osd
[19:55] <mwyatt> jmlowe: thanks. in terms of overhead we're talking CPU/mem on the host yes? I can monitor that to help balance things out.
[19:57] <mwyatt> I've still got lots of reading to do but thanks for filling in the gaps!
[19:57] <jmlowe> powers of 2 will distribute things more evenly, you can think of it as modulo(hash(object),number of pg's in pool)
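As a worked version of the sizing rule in this thread, using mwyatt's 24-OSD, 3-replica example; the pool name is a placeholder and the pool create line just shows where the rounded value would go.

    osds=24; replicas=3
    pgs=$(( osds * 100 / replicas ))              # 800
    pow=1; while [ "$pow" -lt "$pgs" ]; do pow=$(( pow * 2 )); done
    echo "$pow"                                   # 1024
    ceph osd pool create testpool "$pow" "$pow"   # pg_num and pgp_num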
[19:57] * bkopilov (~bkopilov@213.57.17.63) has joined #ceph
[19:58] <brad_mssw> is it possible to delete the default pools of 'data' and 'metadata' if not using cephfs?
[19:58] <jmlowe> mwyatt: I believe the overhead is most important when doing recovery
[19:58] <iggy> brad_mssw: yes (in fact I think newer versions don't even create them)
[19:58] <jmlowe> mwyatt: but it is memory first then cpu
[19:59] <brad_mssw> iggy: any particular steps necessary to do it? I get an error stating it is in use by cephfs (or something like that)
[19:59] <iggy> are you running any MDSes?
[19:59] <brad_mssw> iggy: but I don't even have an MDS server running
[19:59] <iggy> okay
[19:59] <iggy> what version of ceph?
[19:59] <brad_mssw> firefly
[20:00] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[20:00] <mwyatt> jmlowe: ahh ok, good point to remember on the recovery. I'll keep it at the nearest power of 2 and call it good. If I want to toy later that's different. Thanks!
[20:00] <iggy> then I got nothing... might wait for one of the devs
[20:00] * funnel (~funnel@0001c7d4.user.oftc.net) Quit (Read error: Connection reset by peer)
[20:00] * rweeks (~rweeks@c-24-6-118-113.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[20:00] <mwyatt> iggy/brad_mssw: just my 2 cents as well. I just installed firefly without an MDS. I have data/metadata as well.
[20:00] <brad_mssw> # ceph osd pool delete data data --yes-i-really-really-mean-it
[20:00] <brad_mssw> Error EBUSY: pool 'data' is in use by CephFS
[20:01] <iggy> okay, there was a mailing list post about it
[20:01] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[20:01] <iggy> there might be another step before that to say "I'm not using cephfs"
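For the record, releases newer than the firefly being discussed here do have an explicit way to tell the cluster the filesystem is gone before deleting its pools; whether an equivalent step exists on firefly itself I can't confirm, so treat this as a sketch for later versions. <fsname> is a placeholder.

    ceph mds stat                                   # confirm no MDS is active
    ceph fs ls                                      # newer releases: list filesystems
    ceph fs rm <fsname> --yes-i-really-mean-it      # newer releases: drop the fs reference
    ceph osd pool delete data data --yes-i-really-really-mean-it
    ceph osd pool delete metadata metadata --yes-i-really-really-mean-it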
[20:02] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[20:03] * Guest285 (~rweeks@c-24-6-118-113.hsd1.ca.comcast.net) Quit (Read error: Operation timed out)
[20:04] * markbby (~Adium@168.94.245.3) has joined #ceph
[20:08] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[20:08] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[20:10] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[20:11] <Anticimex> there isn't any "local affinity" yet *within* a cluster, if i for example store 2 replicas in site a, and 1 replica in site b, right? to make reads from b hit b first, that is
[20:11] <Anticimex> (~10ms rtt between sites)
[20:13] <joshd1> Anticimex: there is at the librados level, or for rbd reading from snapshots/parent snapshots
[20:15] <Anticimex> yeah, i found a sage snia presentation and inktank product sheet so far
[20:15] * bjornar (~bjornar@ti0099a430-0158.bb.online.no) has joined #ceph
[20:15] <Anticimex> for me it appears i should just treat the sites as one and split it, but make CRUSH aware of the topology
[20:16] <Anticimex> in this particular home-use case
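The usual way to make CRUSH aware of a two-site layout like this is the decompile/edit/recompile cycle sketched below; the file names are arbitrary, and the actual buckets and rule you write depend on how many replicas you want in each site.

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # edit crush.txt: add e.g. 'datacenter' buckets for site-a and site-b,
    # move the hosts under them, and write a rule that chooses across sites
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new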
[20:20] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[20:25] <kitz> I think I have a couple slow journal SSDs which are slowing down my whole cluster. Which performance counters could I look at to help me verify this?
[20:26] * ircolle is now known as ircolle-away
[20:30] * Rogerio (~c8dd8032@redenorte.net) has joined #ceph
[20:30] * rweeks (~rweeks@192.169.20.75.static.etheric.net) has joined #ceph
[20:31] * chrisjones (~chrisjone@vpngaf.ccur.com) Quit (Remote host closed the connection)
[20:32] * baylight (~tbayly@69-195-66-4.unifiedlayer.com) Quit (Ping timeout: 480 seconds)
[20:32] * fmanana (~fdmanana@bl14-136-41.dsl.telepac.pt) has joined #ceph
[20:34] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[20:37] * fdmanana (~fdmanana@bl5-3-231.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[20:38] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) has joined #ceph
[20:38] * Rogerio (~c8dd8032@redenorte.net) Quit (Quit: (( WebIRC Gratuito www.webirc.com.br )))
[20:38] * Rogerio (~c8dd8032@redenorte.net) has joined #ceph
[20:39] * Rogerio (~c8dd8032@redenorte.net) has left #ceph
[20:41] * aldavud (~aldavud@213.55.184.163) has joined #ceph
[20:43] * Rogerio (~oftc-webi@200-221-128-50.corp.uolinc.com) has joined #ceph
[20:45] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[20:45] * ChanServ sets mode +v andreask
[20:46] * imriz (~imriz@5.29.200.177) has joined #ceph
[20:47] <Rogerio> Is there a way to use Ceph block storage like a SAN, i.e. export it over iSCSI?
[20:48] <bens> nope
[20:48] <kraken> http://i.imgur.com/ErtgS.gif
[20:48] <bens> 1 ops are blocked > 65.536 sec
[20:48] <bens> 1 ops are blocked > 65.536 sec on osd.6
[20:48] <bens> 65536
[20:48] <bens> seems like an unusual number.
[20:48] <bens> too coincidental.
[20:49] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[20:51] * rendar (~I@87.19.176.217) Quit (Read error: Operation timed out)
[20:51] * sigsegv (~sigsegv@188.25.120.56) has joined #ceph
[20:51] <gregsfortytwo> Rogerio: there's a partly-tested backend for tgt
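A rough sketch of what the tgt route can look like, assuming a tgt build that includes the rbd backing-store gregsfortytwo mentions; the target IQN and rbd/myimage are placeholders.

    tgtadm --lld iscsi --mode target --op new --tid 1 \
           --targetname iqn.2014-07.com.example:rbd
    tgtadm --lld iscsi --mode logicalunit --op new --tid 1 --lun 1 \
           --bstype rbd --backing-store rbd/myimage
    tgtadm --lld iscsi --mode target --op bind --tid 1 --initiator-address ALL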
[20:51] <gregsfortytwo> bens: dump the ops in flight and see what it's stuck on
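Dumping the in-flight ops for the blocked requests bens pasted would look roughly like this, run on the host carrying osd.6 (the socket path may differ per install):

    ceph daemon osd.6 dump_ops_in_flight
    # or, addressing the admin socket directly:
    ceph --admin-daemon /var/run/ceph/ceph-osd.6.asok dump_ops_in_flight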
[20:55] <kitz> does the osd->op_latency counter include the time for an OSD's peers to also write data or is it just for itself?
[20:56] * rendar (~I@87.19.176.217) has joined #ceph
[20:59] * tloveridge (~Adium@67.21.63.134) has joined #ceph
[21:01] * tloveridge (~Adium@67.21.63.134) has left #ceph
[21:01] * sarob (~sarob@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[21:01] * dignus (~jkooijman@t-x.dignus.nl) Quit (Read error: Operation timed out)
[21:03] * sz0 (~sz0@46.197.48.116) has joined #ceph
[21:09] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Read error: Connection reset by peer)
[21:09] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[21:12] * KaZeR (~kazer@64.201.252.132) Quit (Remote host closed the connection)
[21:16] * Rogerio (~oftc-webi@200-221-128-50.corp.uolinc.com) has left #ceph
[21:16] * kitz (~kitz@admin163-7.hampshire.edu) Quit (Quit: kitz)
[21:19] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[21:23] <gregsfortytwo> kitz: it's completion time for the Op as a whole, which includes peer writes
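To compare those counters across OSDs and hunt for the slow one kitz describes, a per-OSD admin-socket query along these lines is a reasonable starting point; the counter names are the ones from kitz's question, the OSD ids are placeholders, and the grep is just a crude filter over the JSON output (ceph daemon only works on the host where that OSD runs).

    for id in 0 1 2 3; do
        echo "== osd.$id =="
        ceph daemon osd.$id perf dump | \
            grep -A2 -E 'op_r_latency|op_w_latency|journal_latency|apply_latency'
    done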
[21:25] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[21:25] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[21:27] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) Quit (Read error: Operation timed out)
[21:31] * rturk is now known as rturk|afk
[21:32] * redcavalier (~redcavali@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[21:35] * aldavud (~aldavud@213.55.184.163) Quit (Ping timeout: 480 seconds)
[21:36] * dignus (~jkooijman@t-x.dignus.nl) has joined #ceph
[21:37] <redcavalier> Hi guys, I've set up a Ceph cluster, but now I want to test it thoroughly and see what speed I can get and how redundant it is as a block device. I'd like to test it with something that can be set up easily and quickly. Any suggestion?
[21:41] <mwyatt> I second redcavalier's question. I'm going to connect Ceph up to an OpenStack test deployment but any good advice on test platforms would be interesting to hear!
[21:42] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) Quit (Remote host closed the connection)
[21:43] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) has joined #ceph
[21:48] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[21:48] * sarob (~sarob@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[21:48] * sarob (~sarob@2001:4998:effd:600:c584:6f3e:b058:963c) has joined #ceph
[21:49] * angdraug (~angdraug@12.164.168.117) Quit ()
[21:51] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[21:52] <Vacum_> redcavalier / mwyatt : have a look at "cosbench"
[21:53] <redcavalier> This is for object storage though. Not bad, but not exactly what I'm looking for.
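For block-oriented quick tests rather than object-store ones, rados bench gives raw cluster throughput, and fio's rbd engine (if your fio build includes it) gets closer to block-device behaviour; pool and image names below are placeholders.

    rados bench -p testpool 60 write --no-cleanup
    rados bench -p testpool 60 seq
    rbd create testpool/bench --size 10240          # 10 GB image
    fio --name=rbdtest --ioengine=rbd --clientname=admin \
        --pool=testpool --rbdname=bench \
        --rw=randwrite --bs=4k --iodepth=32 --direct=1 \
        --runtime=60 --time_based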
[21:56] * sarob (~sarob@2001:4998:effd:600:c584:6f3e:b058:963c) Quit (Ping timeout: 480 seconds)
[21:57] <redcavalier> I've seen Ceph block devices used mostly with openstack. Would there be a way, for example, to hook up the Ceph cluster to a simple Xen hypervisor, or does it absolutely need a module like cinder?
[21:58] <gregsfortytwo> redcavalier: oh, you can use it with anything that uses qemu, although I can't help you with the exact commands to run
[21:58] <gregsfortytwo> or with many kernels, although it will have different performance characteristics than the userspace solutions
[21:58] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[21:58] <gregsfortytwo> (rbdmap)
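A condensed sketch of the two paths gregsfortytwo mentions, userspace qemu/librbd versus the kernel client; names and sizes are placeholders, and the ceph.com docs linked below cover the details.

    # userspace (qemu/librbd):
    qemu-img create -f raw rbd:rbd/vmdisk 10G
    qemu-system-x86_64 -m 1024 -drive format=raw,file=rbd:rbd/vmdisk

    # kernel client (different performance profile, as noted above):
    rbd create rbd/kdisk --size 10240
    rbd map rbd/kdisk                # shows up as /dev/rbd0 (or similar)
    mkfs.xfs /dev/rbd0
    mount /dev/rbd0 /mnt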
[21:59] * tracphil (~tracphil@130.14.71.217) Quit (Quit: leaving)
[22:00] <redcavalier> I see, thx, I'll look into these options, see if there's anything I can setup quickly and somewhat painlessly.
[22:01] * sputnik13 (~sputnik13@207.8.121.241) Quit ()
[22:01] <t0rn> newer tgtd has rbd support as well
[22:02] <gregsfortytwo> redcavalier: they should be sufficiently documented if you look at ceph.com/docs
[22:02] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:03] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[22:03] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[22:04] <redcavalier> I had looked into the ceph doc, but not in the right spot I believe. Again, thanks for the suggestions.
[22:04] <rweeks> http://ceph.com/docs/master/rbd/qemu-rbd/ is where you should start
[22:05] <rweeks> http://ceph.com/docs/master/rbd/rbd-ko/ for kernel modules as opposed to QEMU
[22:05] * jmlowe (~Adium@2601:d:a800:511:f06a:a63b:e831:27fb) has left #ceph
[22:05] <rweeks> and http://ceph.com/docs/master/rbd/libvirt/ for libvirt with QEMU
[22:06] <rweeks> that last one may be most pertinent to you
[22:06] <rweeks> also http://ceph.com/docs/master/rbd/rbd-cloudstack/
[22:06] <redcavalier> ok
[22:07] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[22:07] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[22:10] * sputnik13 (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:10] * leseb (~leseb@185.21.174.206) Quit (Killed (NickServ (Too many failed password attempts.)))
[22:10] * leseb (~leseb@185.21.174.206) has joined #ceph
[22:14] * sz0_ (~sz0@46.197.48.116) has joined #ceph
[22:14] * sz0_ (~sz0@46.197.48.116) Quit ()
[22:15] * sz0_ (~sz0@46.197.48.116) has joined #ceph
[22:19] * sarob (~sarob@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:19] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:21] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[22:23] * rturk|afk is now known as rturk
[22:24] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[22:24] * ChanServ sets mode +v andreask
[22:27] * sarob (~sarob@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[22:30] * beardo_ is now known as beardo
[22:31] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[22:34] <beardo> so I'm creating an erasure coded profile for an erasure pool in my cluster and I'm trying to come up with a value for k, assuming m=3. Since each of the k data chunks will be stored on a separate osd, it seems like I would want to make k as big as possible?
[22:34] <beardo> based on info here: http://karan-mj.blogspot.com/2014/04/erasure-coding-in-ceph.html
[22:35] * sarob (~sarob@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[22:35] <beardo> or maybe make k and m as big as possible to get the storage required value I'm shooting for?
[22:35] <gleam> a higher K while keeping M constant will save disk space, a higher M while keeping K constant will cost space but improve redundancy
[22:36] <gleam> a higher K + M combined will cost cpu cycles and slow down recovery and potentially writing
[22:36] * imriz (~imriz@5.29.200.177) Quit (Read error: Operation timed out)
[22:37] <gleam> http://dachary.org/?p=3042
[22:42] <beardo> gleam, thanks!
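For concreteness, creating a firefly-style profile and pool along the lines gleam and beardo are discussing looks roughly like this; k=8, m=3 and the pg counts are illustrative values, not a recommendation, and the profile/pool names are placeholders.

    ceph osd erasure-code-profile set ecprofile \
        k=8 m=3 ruleset-failure-domain=host
    ceph osd erasure-code-profile get ecprofile
    ceph osd pool create ecpool 1024 1024 erasure ecprofile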
[22:50] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[22:54] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[22:58] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has left #ceph
[22:59] * Ackowa (~oftc-webi@c-7e6772d5.018-37-736b651.cust.bredbandsbolaget.se) has joined #ceph
[23:03] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[23:05] <Ackowa> hi, I'm trying to add a new OSD machine to my ceph cluster. Do I need to run ceph-deploy new <server> on it even if the guide says it's for the "initial-monitor-node", or is that only used if I want to deploy monitor nodes?
[23:10] * primechu_ (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[23:10] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[23:11] <gregsfortytwo> ceph-deploy new is for generating the initial seed information on a new cluster (the "initial monitor node")
[23:11] <gregsfortytwo> just do the OSD pieces
[23:11] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[23:11] <Ackowa> gregsfortytwo: so install should be enough?
[23:12] <gregsfortytwo> and the steps to add new OSDs, yeah
[23:12] <Ackowa> do I need to push the config to the new server as well? Because I get an error when running gatherkeys
[23:15] <gregsfortytwo> yes, I believe these docs are pretty clear, so if you have an issue with any particular step lay out exactly what you did and what the response was
[23:15] <gregsfortytwo> but I don't have it in my head; I never deploy my own clusters
[23:16] <mwyatt> I don't believe you need to gatherkeys again. From my limited experience, for a brand new node: set up passwordless ssh, copy the ssh key over, and make sure that user is set up in .ssh/config to be used.
[23:16] <mwyatt> then ceph-deploy install
[23:17] <mwyatt> ceph-deploy disk list to identify what you have, zap if needed, osd create (or prepare/activate)
[23:17] <mwyatt> that's all I did to get my new node online and in the cluster.
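Put together, the ceph-deploy steps mwyatt describes look roughly like this from the admin host; "newnode" and "sdb" are placeholders, and the node:disk syntax shown is the form used by ceph-deploy around this release.

    ceph-deploy install newnode
    ceph-deploy config push newnode        # or: ceph-deploy admin newnode
    ceph-deploy disk list newnode
    ceph-deploy disk zap newnode:sdb
    ceph-deploy osd create newnode:sdb     # or: osd prepare followed by osd activate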
[23:17] <Ackowa> ok, thanks.
[23:18] <Ackowa> Probably need to go back and review exactly what I did and see if I can find where it starts going wrong then
[23:18] <mwyatt> yep. I'd have to go back to see if I had to manually copy ceph.conf.
[23:18] * sigsegv (~sigsegv@188.25.120.56) has left #ceph
[23:19] <Ackowa> should I see any running processes on the server directly after install, or are they not started until the OSD is configured?
[23:21] * KaZeR (~kazer@c-67-161-64-186.hsd1.ca.comcast.net) has joined #ceph
[23:21] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[23:22] * rturk is now known as rturk|afk
[23:27] * john (~john@2601:9:6c80:7df:6dc5:84ee:2a7a:d707) has joined #ceph
[23:28] * nljmo_ (~nljmo@5ED6C263.cm-7-7d.dynamic.ziggo.nl) has joined #ceph
[23:31] * kevinc (~kevinc__@client65-78.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[23:31] * rturk|afk is now known as rturk
[23:32] <mwyatt> Ackowa: that I'm not sure on. I think nothing will be running until the OSD is configured.
[23:33] <Ackowa> mwyatt: ok, thanks
[23:34] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[23:34] * kevinc (~kevinc__@client65-78.sdsc.edu) has joined #ceph
[23:36] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[23:36] * KaZeR (~kazer@c-67-161-64-186.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[23:40] * Ackowa (~oftc-webi@c-7e6772d5.018-37-736b651.cust.bredbandsbolaget.se) Quit (Remote host closed the connection)
[23:41] * beardo_ (~sma310@216-164-125-67.c3-0.drf-ubr1.atw-drf.pa.cable.rcn.com) has joined #ceph
[23:46] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[23:46] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[23:47] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[23:49] * andreask (~andreask@zid-vpnn026.uibk.ac.at) has joined #ceph
[23:49] * ChanServ sets mode +v andreask
[23:51] * stewiem2000 (~stewiem20@195.10.250.233) Quit (Ping timeout: 480 seconds)
[23:52] * andreask (~andreask@zid-vpnn026.uibk.ac.at) has left #ceph
[23:58] * Rogerio (~oftc-webi@200-221-128-50.corp.uolinc.com) has joined #ceph
[23:58] * bjornar (~bjornar@ti0099a430-0158.bb.online.no) Quit (Ping timeout: 480 seconds)
[23:58] * rendar (~I@87.19.176.217) Quit ()
[23:59] * Sysadmin88 (~IceChat77@94.4.20.0) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.