#ceph IRC Log

Index

IRC Log for 2015-02-25

Timestamps are in GMT/BST.

[0:00] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[0:05] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[0:06] * scuttlemonkey is now known as scuttle|afk
[0:08] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[0:10] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[0:12] * alram_ (~alram@38.122.20.226) Quit (Quit: leaving)
[0:12] * xarses (~andreww@173-164-194-206-SFBA.hfc.comcastbusiness.net) has joined #ceph
[0:12] * togdon (~togdon@74.121.28.6) Quit (Quit: Textual IRC Client: www.textualapp.com)
[0:19] * avozza (~avozza@83.162.204.36) Quit (Remote host closed the connection)
[0:19] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:30] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[0:32] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Remote host closed the connection)
[0:34] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[0:42] * primechuck (~primechuc@host-95-2-129.infobunker.com) Quit (Ping timeout: 480 seconds)
[0:43] * vata (~vata@208.88.110.46) Quit (Quit: Leaving.)
[0:47] * moore (~moore@97-124-123-201.phnx.qwest.net) has joined #ceph
[0:47] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[0:54] <JoeJulian> "rados -p .rgw.root ls" lists two entries and hangs. How can I diagnose why?
[0:55] * moore (~moore@97-124-123-201.phnx.qwest.net) Quit (Ping timeout: 480 seconds)
[0:56] <badone> JoeJulian: strace/ltrace may give some ideas?
[0:57] <badone> JoeJulian: prolly need debuginfo loaded for ltrace
[0:57] <badone> JoeJulian: also...
[0:57] <badone> JoeJulian: ps axHo stat,pid,tid,pgid,ppid,comm,wchan
[0:57] <badone> JoeJulian: ^ what does that tell you about whether they are in d-state, what system call they are in, etc.?
[1:00] <JoeJulian> Nothing but a bunch of futex wait and poll schedule timeout
[1:01] <badone> JoeJulian: and the status is "S"
[1:01] <badone> ?
[1:01] <gregsfortytwo> JoeJulian: are all your PGs active?
[1:01] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:02] <JoeJulian> Hrm, no.
[1:03] <gregsfortytwo> that's why ;)
[1:03] <JoeJulian> So I need to add a check to see if there are any stale pgs before running this command. Stupid command should fail.
[1:03] * houkouonchi-work (~linux@2607:f298:b:635:225:90ff:fe39:38ce) Quit (Ping timeout: 480 seconds)
[1:03] <badone> gregsfortytwo: so it's waiting on a lock obviously?
[1:03] * debian1121 (~bcolbert@24.126.201.64) Quit (Quit: Leaving.)
[1:03] <gregsfortytwo> no
[1:03] <gregsfortytwo> it's waiting for a PG to become available so it can list the objects in it
[1:04] <gregsfortytwo> Ceph basically never fails an IO, it just waits around assuming the data will be available soon
[1:04] <badone> gregsfortytwo: but it appears not to be in d-state so is it polling?
[1:04] <gregsfortytwo> well, internally the request is waiting on a condition variable and getting poked whenever a new osdmap comes along
[1:04] <gregsfortytwo> and those are getting pushed to it by the monitors
[1:05] <badone> gregsfortytwo: ahh, got you
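
(A sketch of the pre-flight check JoeJulian describes above, assuming the stock ceph CLI; .rgw.root is the example pool from this conversation:)

    ceph health detail            # reports degraded/stuck PGs
    ceph pg dump_stuck inactive   # PGs that cannot serve I/O; a listing would block on these
    ceph pg dump_stuck stale
    # only run the listing once nothing is stuck
    rados -p .rgw.root ls
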
[1:05] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[1:05] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[1:08] <badone> JoeJulian: nice to see you over here. Seen plenty of you in the gluster community of course, your contribution there is legendary :)
[1:09] <JoeJulian> Hehe, thanks. I just BS a lot. ;)
[1:09] <badone> haha :)
[1:09] * ChrisNBl_ (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[1:10] * dmsimard is now known as dmsimard_away
[1:13] * danieagle (~Daniel@200-148-39-11.dsl.telesp.net.br) Quit (Quit: Thanks for Everything! :-) see you later :-))
[1:16] <flaf> Sorry, I have another question about osd journals as block devices.
[1:18] <flaf> So I can just use a symlink /var/lib/ceph/osd/$cluster-$id/journal which redirects to my block device /dev/sdb1 (for instance)
[1:18] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: Lost terminal)
[1:18] * gregmark (~Adium@68.87.42.115) Quit (Quit: Leaving.)
[1:19] <flaf> but I don't like paths like /dev/sdb1 because they could be unpredictable.
[1:19] <flaf> I would prefer something like /dev/disk/by-uuid/xxxxx
[1:20] <flaf> But a uuid is available only if there is a filesystem on the block device.
[1:20] * scuttle|afk is now known as scuttlemonkey
[1:20] <flaf> And in this case, there is no fs, just a raw block device.
[1:21] <gregsfortytwo> disclaimer: I am a horrible excuse for a linux admin
[1:21] <flaf> What do you do? Do you use paths like /dev/sdb1 etc.? Is that not a little unsafe?
[1:21] <gregsfortytwo> but I'm pretty sure you're wrong about when the by-uuid symlinks show up
[1:22] <flaf> Ah, possibly. What do you mean?
[1:22] <flaf> what is wrong?
[1:22] <gregsfortytwo> you might need a partition, but if you look at ceph-disk when given a raw disk for the journal I think it makes use of those by-uuid symlinks
[1:23] <badone> and if you use something that is *non-standard* you run the risk of it being broken by some future feature or update
[1:23] <flaf> But if there is a uuid, there is a filesystem in the partition /dev/sdb1, isn't there?
[1:25] <flaf> If I just create a partition /dev/sdb1 with fdisk, the partition has no uuid.
[1:28] <flaf> In fact, my question is: if for a specific osd I have "osd journal = /dev/sdb1", must /dev/sdb1 be a simple partition without a filesystem, or a partition with a fs (xfs etc.)?
[1:31] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:37] <JoeJulian> flaf: What I've done is create udev rules to make a path that is static based on the slot and tray in my sas expander.
[1:39] <flaf> Ah ok JoeJulian. Thx. And the dedicated partition of a osd journal is just a raw partition without filesystem (an uuid), is that correct?
[1:39] * hellertime (~Adium@pool-173-48-56-84.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[1:39] <flaf> s/an / and/
[1:39] <kraken> flaf meant to say: Ah ok JoeJulian. Thx. And the dedicated partition of a osd journal is just a raw partition without filesystem ( anduuid), is that correct?
[1:41] <JoeJulian> So if my SSDs are in slots 1-4 of tray 1, I use /dev/tray1/slota (for example) as the journal for /dev/tray1/slote and let ceph-disk do the partitioning and formatting.
[1:42] <JoeJulian> Later, when you reboot, the ceph udev rules will handle the mounting of those osds and journals.
[1:45] <flaf> Yes I see. Without a filesystem and uuid, udev rules seem to me the only way to be sure of the device path.
[1:45] <flaf> thx JoeJulian. ;)
[1:47] <JoeJulian> You're welcome.
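
(A sketch of the two approaches discussed above. A GPT partition gets a stable /dev/disk/by-partuuid symlink even with no filesystem on it, which is what ceph-disk relies on; the device name, size, and udev rule values below are illustrative assumptions, not details from the conversation:)

    # GPT route: a raw journal partition still gets a stable by-partuuid link
    sgdisk --new=1:0:+10G /dev/sdb                # example device; raw partition, no fs
    sgdisk --info=1 /dev/sdb | grep 'unique GUID' # the partition GUID
    ls -l /dev/disk/by-partuuid/                  # the stable symlink appears here
    # udev route, as JoeJulian describes: pin a name to a physical slot
    # /etc/udev/rules.d/60-tray.rules (hypothetical ID_PATH value):
    # KERNEL=="sd?", ENV{ID_PATH}=="pci-0000:03:00.0-sas-phy1-lun-0", SYMLINK+="tray1/slota"
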
[1:49] * treaki (~treaki@p5B031148.dip0.t-ipconnect.de) Quit (Read error: Connection timed out)
[1:50] * treaki (~treaki@p5B031148.dip0.t-ipconnect.de) has joined #ceph
[1:53] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[1:55] * ircolle (~ircolle@38.122.20.226) Quit (Quit: Leaving.)
[1:55] * bandrus1 (~brian@50.23.113.232) Quit (Quit: Leaving.)
[1:57] * ircolle (~ircolle@38.122.20.226) has joined #ceph
[1:57] * xarses_ (~andreww@209-254-72-194.ip.mcleodusa.net) has joined #ceph
[2:04] * xarses (~andreww@173-164-194-206-SFBA.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[2:06] * tobiash (~quassel@mail.bmw-carit.de) has joined #ceph
[2:07] * mfa298_ (~mfa298@gateway.yapd.net) has joined #ceph
[2:07] * bjoern (~bjoern_of@213-239-215-232.clients.your-server.de) has joined #ceph
[2:07] * todin_ (tuxadero@kudu.in-berlin.de) has joined #ceph
[2:08] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * mfa298 (~mfa298@gateway.yapd.net) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * Sysadmin88 (~IceChat77@2.125.213.8) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * fdmanana (~fdmanana@bl5-5-68.dsl.telepac.pt) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * tdb (~tdb@myrtle.kent.ac.uk) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * olc (~olecam@93.184.35.82) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * lxo (~aoliva@lxo.user.oftc.net) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * xophe_ (~xophe@ows-5-104-102-23.eu-west-1.compute.outscale.com) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * haomaiwa_ (~haomaiwan@115.218.158.93) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * todin (tuxadero@kudu.in-berlin.de) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * jluis (~joao@249.38.136.95.rev.vodafone.pt) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * morse (~morse@supercomputing.univpm.it) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * lurbs (user@uber.geek.nz) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * tobiash_ (~quassel@mail.bmw-carit.de) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * spudly (~spudly@ext-tok.murf.org) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * SteveCapper (~steven@marmot.wormnet.eu) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * _br_ (~bjoern_of@213-239-215-232.clients.your-server.de) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:08] * bjoern is now known as _br_
[2:10] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[2:11] * fdmanana (~fdmanana@bl5-5-68.dsl.telepac.pt) has joined #ceph
[2:12] * ccheng (~ccheng@128.211.165.1) Quit (Remote host closed the connection)
[2:13] * olc (~olecam@93.184.35.82) has joined #ceph
[2:13] * SteveCapper (~steven@marmot.wormnet.eu) has joined #ceph
[2:13] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:14] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[2:14] * Sysadmin88 (~IceChat77@2.125.213.8) has joined #ceph
[2:14] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:14] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[2:14] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[2:14] * xophe_ (~xophe@ows-5-104-102-23.eu-west-1.compute.outscale.com) has joined #ceph
[2:14] * haomaiwa_ (~haomaiwan@115.218.158.93) has joined #ceph
[2:14] * jluis (~joao@249.38.136.95.rev.vodafone.pt) has joined #ceph
[2:14] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[2:14] * lurbs (user@uber.geek.nz) has joined #ceph
[2:14] * spudly (~spudly@ext-tok.murf.org) has joined #ceph
[2:14] * ChanServ sets mode +v jluis
[2:16] * ghost1 (~pablodelg@107-208-117-140.lightspeed.miamfl.sbcglobal.net) has joined #ceph
[2:23] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[2:24] * vasu (~vasu@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:24] * spudly (~spudly@ext-tok.murf.org) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:24] * lurbs (user@uber.geek.nz) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:24] * morse (~morse@supercomputing.univpm.it) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:24] * jluis (~joao@249.38.136.95.rev.vodafone.pt) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:24] * haomaiwa_ (~haomaiwan@115.218.158.93) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:24] * xophe_ (~xophe@ows-5-104-102-23.eu-west-1.compute.outscale.com) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:24] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:24] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:24] * lxo (~aoliva@lxo.user.oftc.net) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:24] * Sysadmin88 (~IceChat77@2.125.213.8) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:24] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:27] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[2:30] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[2:30] * Sysadmin88 (~IceChat77@2.125.213.8) has joined #ceph
[2:30] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:30] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[2:30] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[2:30] * xophe_ (~xophe@ows-5-104-102-23.eu-west-1.compute.outscale.com) has joined #ceph
[2:30] * haomaiwa_ (~haomaiwan@115.218.158.93) has joined #ceph
[2:30] * jluis (~joao@249.38.136.95.rev.vodafone.pt) has joined #ceph
[2:30] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[2:30] * lurbs (user@uber.geek.nz) has joined #ceph
[2:30] * spudly (~spudly@ext-tok.murf.org) has joined #ceph
[2:30] * ircolle (~ircolle@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:31] * jwilkins (~jwilkins@38.122.20.226) Quit (Ping timeout: 480 seconds)
[2:32] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) has joined #ceph
[2:35] * nsantos (~Nelson@bl21-94-62.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[2:39] * BranchPr1dictor (branch@predictor.org.pl) has joined #ceph
[2:40] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[2:40] * bandrus (~brian@50.23.113.232) has joined #ceph
[2:41] * BranchPredictor (branch@predictor.org.pl) Quit (Ping timeout: 480 seconds)
[2:41] * xarses_ (~andreww@209-254-72-194.ip.mcleodusa.net) Quit (Ping timeout: 480 seconds)
[2:41] * spudly (~spudly@ext-tok.murf.org) Quit (charon.oftc.net testlink-beta.oftc.net)
[2:41] * lurbs (user@uber.geek.nz) Quit (charon.oftc.net testlink-beta.oftc.net)
[2:41] * morse (~morse@supercomputing.univpm.it) Quit (charon.oftc.net testlink-beta.oftc.net)
[2:41] * jluis (~joao@249.38.136.95.rev.vodafone.pt) Quit (charon.oftc.net testlink-beta.oftc.net)
[2:41] * haomaiwa_ (~haomaiwan@115.218.158.93) Quit (charon.oftc.net testlink-beta.oftc.net)
[2:41] * xophe_ (~xophe@ows-5-104-102-23.eu-west-1.compute.outscale.com) Quit (charon.oftc.net testlink-beta.oftc.net)
[2:41] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (charon.oftc.net testlink-beta.oftc.net)
[2:41] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (charon.oftc.net testlink-beta.oftc.net)
[2:41] * lxo (~aoliva@lxo.user.oftc.net) Quit (charon.oftc.net testlink-beta.oftc.net)
[2:41] * Sysadmin88 (~IceChat77@2.125.213.8) Quit (charon.oftc.net testlink-beta.oftc.net)
[2:41] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (charon.oftc.net testlink-beta.oftc.net)
[2:44] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[2:45] * zok_ (zok@neurosis.pl) Quit (Read error: Connection reset by peer)
[2:45] * kevinkevin-work (6dbebb8f@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[2:45] * zok (zok@neurosis.pl) has joined #ceph
[2:46] * kevinkevin-work (6dbebb8f@107.161.19.109) has joined #ceph
[2:49] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) has joined #ceph
[2:51] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[2:51] * Sysadmin88 (~IceChat77@2.125.213.8) has joined #ceph
[2:51] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[2:51] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[2:51] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[2:51] * xophe_ (~xophe@ows-5-104-102-23.eu-west-1.compute.outscale.com) has joined #ceph
[2:51] * haomaiwa_ (~haomaiwan@115.218.158.93) has joined #ceph
[2:51] * jluis (~joao@249.38.136.95.rev.vodafone.pt) has joined #ceph
[2:51] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[2:51] * lurbs (user@uber.geek.nz) has joined #ceph
[2:51] * spudly (~spudly@ext-tok.murf.org) has joined #ceph
[2:53] * shyu (~Shanzhi@119.254.196.66) Quit (Remote host closed the connection)
[2:54] * rljohnsn1 (~rljohnsn@ns25.8x8.com) has joined #ceph
[2:55] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Read error: Connection reset by peer)
[2:56] * rmoe (~quassel@12.164.168.117) Quit (Ping timeout: 480 seconds)
[2:57] * spudly (~spudly@ext-tok.murf.org) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:57] * lurbs (user@uber.geek.nz) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:57] * morse (~morse@supercomputing.univpm.it) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:57] * jluis (~joao@249.38.136.95.rev.vodafone.pt) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:57] * haomaiwa_ (~haomaiwan@115.218.158.93) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:57] * xophe_ (~xophe@ows-5-104-102-23.eu-west-1.compute.outscale.com) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:57] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:57] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:57] * lxo (~aoliva@lxo.user.oftc.net) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:57] * Sysadmin88 (~IceChat77@2.125.213.8) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:57] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (reticulum.oftc.net testlink-beta.oftc.net)
[2:57] * jclm (~jclm@209.49.224.62) Quit (Quit: Leaving.)
[3:03] * Ceph-Log-Bot___ (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[3:03] * bandrus (~brian@50.23.113.232) Quit (Quit: Leaving.)
[3:04] * Ceph-Log-Bot___ (~logstash@185.66.248.215) has joined #ceph
[3:04] * Hazelesque (~hazel@2a03:9800:10:13::2) Quit (Quit: No Ping reply in 180 seconds.)
[3:04] * Hazelesque (~hazel@2a03:9800:10:13::2) has joined #ceph
[3:07] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[3:07] * Sysadmin88 (~IceChat77@2.125.213.8) has joined #ceph
[3:07] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:07] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[3:07] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) has joined #ceph
[3:07] * xophe_ (~xophe@ows-5-104-102-23.eu-west-1.compute.outscale.com) has joined #ceph
[3:07] * haomaiwa_ (~haomaiwan@115.218.158.93) has joined #ceph
[3:07] * jluis (~joao@249.38.136.95.rev.vodafone.pt) has joined #ceph
[3:07] * morse (~morse@supercomputing.univpm.it) has joined #ceph
[3:07] * lurbs (user@uber.geek.nz) has joined #ceph
[3:07] * spudly (~spudly@ext-tok.murf.org) has joined #ceph
[3:10] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:11] * cholcombe973 (~chris@pool-108-42-144-175.snfcca.fios.verizon.net) has left #ceph
[3:12] * moore (~moore@97-124-123-201.phnx.qwest.net) has joined #ceph
[3:14] * joef1 (~Adium@2601:9:280:f2e:5c3:ad00:b0b4:e833) has joined #ceph
[3:14] * joef1 (~Adium@2601:9:280:f2e:5c3:ad00:b0b4:e833) has left #ceph
[3:16] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[3:16] * TiCPU (~ticpu@2001:470:b010:1::10) has joined #ceph
[3:16] * rmoe (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) has joined #ceph
[3:19] <TiCPU> good evening all, I'm using a cluster for libvirt/RBD; what tool would you use to display textually or visualize performance data, either by OSD or by RBD client: read/write throughput, IOPS and cache, to know where the bottleneck could be? I have an issue with machines randomly having latency from 200 ms to 1 s.
[3:19] <TiCPU> Currently on Ceph 0.80.8, 6 machines, 12 OSDs, 6 SSD journals
[3:20] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit (Quit: Leaving.)
[3:22] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[3:22] * kefu (~kefu@114.92.100.153) has joined #ceph
[3:22] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[3:23] * cdelatte (~cdelatte@67.197.3.123) has joined #ceph
[3:25] * sputnik13 (~sputnik13@74.202.214.170) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:26] * moore (~moore@97-124-123-201.phnx.qwest.net) Quit (Remote host closed the connection)
[3:26] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit ()
[3:28] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) Quit (Quit: Away)
[3:30] * danieagle (~Daniel@200-148-39-11.dsl.telesp.net.br) has joined #ceph
[3:31] * TiCPU (~ticpu@2001:470:b010:1::10) Quit (Remote host closed the connection)
[3:32] * cdelatte (~cdelatte@67.197.3.123) Quit (Quit: This computer has gone to sleep)
[3:32] * jamespd_ (~mucky@mucky.socket7.org) Quit (Remote host closed the connection)
[3:33] * jamespd (~mucky@mucky.socket7.org) has joined #ceph
[3:34] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[3:40] * JustSomeone (~test@198.52.199.104) has joined #ceph
[3:40] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) has joined #ceph
[3:44] * JustSomeone (~test@198.52.199.104) Quit (Quit: WeeChat 0.4.3)
[3:47] * rljohnsn1 (~rljohnsn@ns25.8x8.com) Quit (Ping timeout: 480 seconds)
[3:51] * xarses_ (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:53] * TiCPU (~ticpu@2001:470:b010:1::10) has joined #ceph
[3:53] <TiCPU> sorry, had a crash
[4:04] <badone> TiCPU: maybe start with a packet capture and look for delays in that? That should tell you which "machine" is slow, then focus there
[4:06] <TiCPU> I was looking more for a tool that uses data from the ceph admin socket; right now I have check_mk showing exactly which machine experiences latency and when; however, the hosts do not seem overloaded
[4:07] <badone> TiCPU: what is check_mk?
[4:07] <TiCPU> an interface/check aggregator for nagios
[4:08] <badone> that explains why I don't know what it is at least ;)
[4:08] <badone> TiCPU: okay, if you know which machines, then I would try to capture perf data when it is actually "slow"
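
(A sketch of pulling per-OSD latency counters from the admin socket, the kind of data TiCPU is after; osd.0 is an example id and counter names vary somewhat by release:)

    # dump all perf counters from a local OSD's admin socket
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump > perf.json
    # op_r_latency / op_w_latency under the "osd" section are {avgcount, sum} pairs;
    # sum divided by avgcount gives the average latency in seconds
    ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok perf dump | python -m json.tool | grep -A 2 op_w_latency
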
[4:14] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[4:14] * shyu (~Shanzhi@119.254.196.66) Quit (Read error: Connection reset by peer)
[4:14] * joef1 (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) has joined #ceph
[4:14] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[4:16] * joef2 (~Adium@2620:79:0:2420::13) has joined #ceph
[4:20] * joef2 (~Adium@2620:79:0:2420::13) Quit (Read error: Connection reset by peer)
[4:22] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[4:22] * joef1 (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:23] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) has joined #ceph
[4:34] * shyu (~Shanzhi@119.254.196.66) Quit (Ping timeout: 480 seconds)
[4:34] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) Quit (Quit: WeeChat 1.1.1)
[4:38] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) has joined #ceph
[4:42] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Read error: Connection reset by peer)
[4:43] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[4:47] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[4:48] * joef1 (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) has joined #ceph
[4:48] * joef1 (~Adium@c-24-130-254-66.hsd1.ca.comcast.net) Quit ()
[4:48] * joef1 (~Adium@2601:9:280:f2e:fc92:4e64:fb03:8e19) has joined #ceph
[4:48] * joef1 (~Adium@2601:9:280:f2e:fc92:4e64:fb03:8e19) has left #ceph
[4:48] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) Quit (Remote host closed the connection)
[4:54] * danieagle (~Daniel@200-148-39-11.dsl.telesp.net.br) Quit (Quit: Thanks for Everything! :-) see you later :-))
[4:55] * OutOfNoWhere (~rpb@199.68.195.102) has joined #ceph
[5:15] * georgem (~Adium@69-165-159-72.dsl.teksavvy.com) Quit (Quit: Leaving.)
[5:29] * shyu (~Shanzhi@119.254.196.66) Quit (Ping timeout: 480 seconds)
[5:29] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[5:37] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[5:42] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[5:44] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[5:45] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Quit: segutier)
[5:47] * badone_ is now known as badone
[5:48] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[5:49] * ghost1 (~pablodelg@107-208-117-140.lightspeed.miamfl.sbcglobal.net) Quit (Quit: ghost1)
[5:56] * shyu (~Shanzhi@119.254.196.66) Quit (Remote host closed the connection)
[5:58] * Vacuum_ (~vovo@i59F79B0C.versanet.de) has joined #ceph
[6:05] * OutOfNoWhere (~rpb@199.68.195.102) Quit (Ping timeout: 480 seconds)
[6:05] * Vacuum (~vovo@88.130.215.148) Quit (Ping timeout: 480 seconds)
[6:08] * CephTestC (~CephTestC@199.91.185.156) Quit (Ping timeout: 480 seconds)
[6:09] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[6:16] * p01s0n (~oftc-webi@hpm01cs001-ext.asiapac.hp.net) has joined #ceph
[6:19] * PaulC (~paul@122-60-36-115.jetstream.xtra.co.nz) Quit (Remote host closed the connection)
[6:20] <p01s0n> hi, i have a 3 node ceph cluster, each node with 3 osds, and "osd pool default min size = 2" is set; i think with this my cluster can handle one node failure out of 3. Is there any configurable to handle osd failures?
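
(For context, a sketch of the pool-level knobs p01s0n is asking about; rbd is an example pool name. size is the replica count, min_size the floor below which ceph blocks I/O rather than serve it with too few copies:)

    ceph osd pool get rbd size        # replicas kept per object
    ceph osd pool get rbd min_size    # I/O blocks when fewer copies than this are up
    ceph osd pool set rbd min_size 1  # e.g. keep serving I/O with a single surviving copy
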
[6:23] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) has joined #ceph
[6:25] * rdas (~rdas@121.244.87.116) has joined #ceph
[6:29] * ghost1 (~pablodelg@107-208-117-140.lightspeed.miamfl.sbcglobal.net) has joined #ceph
[6:31] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) Quit (Ping timeout: 480 seconds)
[6:34] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[6:37] * ircolle (~ircolle@216.1.187.164) has joined #ceph
[6:38] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) has joined #ceph
[6:40] * vbellur (~vijay@121.244.87.117) has joined #ceph
[6:41] * avozza (~avozza@83.162.204.36) has joined #ceph
[6:43] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[6:56] * zack_dolby (~textual@aa20111001946AB81592.userreverse.dion.ne.jp) has joined #ceph
[6:56] * avozza (~avozza@83.162.204.36) Quit (Remote host closed the connection)
[6:58] * overclk (~overclk@121.244.87.117) has joined #ceph
[7:02] * jwilkins (~jwilkins@216.1.187.164) has joined #ceph
[7:03] * ircolle (~ircolle@216.1.187.164) Quit (Ping timeout: 480 seconds)
[7:03] * jclm (~jclm@mac0536d0.tmodns.net) has joined #ceph
[7:06] * davidzlap1 (~Adium@2605:e000:1313:8003:603b:6fe6:6103:dbe0) has joined #ceph
[7:06] * sage (~quassel@2605:e000:854d:de00:230:48ff:fed3:6786) Quit (Read error: Connection reset by peer)
[7:06] * sage (~quassel@2605:e000:854d:de00:230:48ff:fed3:6786) has joined #ceph
[7:06] * ChanServ sets mode +o sage
[7:09] * mookins_ (~mookins@induct3.lnk.telstra.net) has joined #ceph
[7:09] * shyu (~Shanzhi@119.254.196.66) Quit (Read error: Connection reset by peer)
[7:10] * davidzlap (~Adium@2605:e000:1313:8003:7456:1013:da06:fd6f) Quit (Ping timeout: 480 seconds)
[7:14] * mookins (~mookins@induct3.lnk.telstra.net) Quit (Ping timeout: 480 seconds)
[7:17] * mookins (~mookins@induct3.lnk.telstra.net) has joined #ceph
[7:17] * jwilkins (~jwilkins@216.1.187.164) Quit (Ping timeout: 480 seconds)
[7:17] * mookins_ (~mookins@induct3.lnk.telstra.net) Quit (Ping timeout: 480 seconds)
[7:22] * mookins_ (~mookins@induct3.lnk.telstra.net) has joined #ceph
[7:25] * mookins (~mookins@induct3.lnk.telstra.net) Quit (Ping timeout: 480 seconds)
[7:28] * p01s0n (~oftc-webi@hpm01cs001-ext.asiapac.hp.net) Quit (Remote host closed the connection)
[7:29] * ghost1 (~pablodelg@107-208-117-140.lightspeed.miamfl.sbcglobal.net) Quit (Quit: ghost1)
[7:31] * mookins_ (~mookins@induct3.lnk.telstra.net) Quit (Read error: Connection reset by peer)
[7:32] * mookins (~mookins@induct3.lnk.telstra.net) has joined #ceph
[7:33] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[7:34] * p01s0n (~oftc-webi@hpm01cs002-ext.asiapac.hp.net) has joined #ceph
[7:35] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[7:43] * shyu (~Shanzhi@119.254.196.66) has joined #ceph
[7:56] * linjan (~linjan@195.110.41.9) has joined #ceph
[8:00] * lalatenduM (~lalatendu@121.244.87.124) has joined #ceph
[8:01] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[8:01] * saltlake (~saltlake@pool-71-244-62-208.dllstx.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[8:02] * lalatenduM (~lalatendu@121.244.87.124) Quit ()
[8:02] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[8:05] * Nacer (~Nacer@2001:41d0:fe82:7200:6d59:552:42b8:b36) has joined #ceph
[8:06] * fattaneh (~fattaneh@194.225.33.200) has joined #ceph
[8:06] * cok (~chk@2a02:2350:18:1010:4597:89f1:baf0:3bad) has joined #ceph
[8:10] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) has joined #ceph
[8:12] * stannum (~stannum@85.233.67.10) has joined #ceph
[8:13] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[8:13] * fattaneh (~fattaneh@194.225.33.200) has left #ceph
[8:23] * jclm (~jclm@mac0536d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[8:25] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) Quit (Quit: Leaving.)
[8:25] * puffy (~puffy@50.185.218.255) Quit (Quit: Leaving.)
[8:26] * vilobhmm (~vilobhmm@c-24-23-129-93.hsd1.ca.comcast.net) Quit (Quit: Away)
[8:27] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) Quit (Quit: Ex-Chat)
[8:29] * PaulC (~paul@122-60-36-115.jetstream.xtra.co.nz) has joined #ceph
[8:29] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) has joined #ceph
[8:30] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[8:32] * jclm (~jclm@ip-64-134-224-99.public.wayport.net) has joined #ceph
[8:37] * Nacer (~Nacer@2001:41d0:fe82:7200:6d59:552:42b8:b36) Quit (Remote host closed the connection)
[8:37] * Nacer (~Nacer@2001:41d0:fe82:7200:6d59:552:42b8:b36) has joined #ceph
[8:40] * Sysadmin88 (~IceChat77@2.125.213.8) Quit (Quit: There's nothing dirtier then a giant ball of oil)
[8:43] * kefu (~kefu@114.92.100.153) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[8:45] * Nacer (~Nacer@2001:41d0:fe82:7200:6d59:552:42b8:b36) Quit (Ping timeout: 480 seconds)
[8:48] * avozza (~avozza@a83-160-116-36.adsl.xs4all.nl) has joined #ceph
[8:48] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[8:49] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:50] <stannum> Hi, folks! Is it possible that a pool on SSD OSDs is slower than an erasure-coded pool on spinning SATA OSDs? There are 6 OSD hosts in total, each with 2 bonded 1G NICs, 15 SATA OSDs and 2 SSD OSDs. The SSD OSDs are separated into their own SSD pool by a CRUSH rule. The question is why I get 'rados bench' results for the SSD pool that are 2 times slower than for the EC pool, while dd with oflag=direct on each host shows the SSD disks are at least 2 times faster than SATA (110 MB/s vs 258 MB/s).
[8:50] <Be-El> hi
[8:59] * ohnomrbill (~ohnomrbil@c-67-174-241-112.hsd1.ca.comcast.net) Quit (Quit: ohnomrbill)
[8:59] * ohnomrbill (~ohnomrbil@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[9:00] * haomaiwa_ (~haomaiwan@115.218.158.93) Quit (Remote host closed the connection)
[9:01] * phoenix (~phoenix@vpn1.safedata.ru) has joined #ceph
[9:01] <phoenix> hi men
[9:02] * nils_ (~nils@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[9:02] <phoenix> can someone help me with a ceph cluster?
[9:02] * thomnico (~thomnico@82.166.93.197) has joined #ceph
[9:03] <Be-El> phoenix: feel free to ask questions. if someone around is not busy and able to give you an answer, he/she probably will
[9:04] <phoenix> I replaced an outdated server with another one in the cluster, and now all the drives on the new server are stuck in the down state. I have been struggling with this for a long time and nothing helps.
[9:04] <phoenix> -6 10 host datanode4
[9:04] <phoenix> 26 1 osd.26 down 0
[9:04] <phoenix> 27 1 osd.27 down 0
[9:04] <phoenix> 28 1 osd.28 down 0
[9:04] <phoenix> 29 1 osd.29 down 0
[9:04] <phoenix> 30 1 osd.30 down 0
[9:04] <phoenix> 31 1 osd.31 down 0
[9:04] <phoenix> 32 1 osd.32 down 0
[9:04] <phoenix> 33 1 osd.33 down 0
[9:04] <phoenix> 34 1 osd.34 down 0
[9:04] <phoenix> 35 1 osd.35 down 0
[9:04] <phoenix> 36 1 osd.36 down 0
[9:05] * kefu (~kefu@114.92.100.153) has joined #ceph
[9:05] <phoenix> but on the server, everything is OK
[9:06] <phoenix> have an idea what's wrong?
[9:06] * jclm (~jclm@ip-64-134-224-99.public.wayport.net) Quit (Ping timeout: 480 seconds)
[9:07] <Be-El> you can have a look at the osd logs (usually in /var/log/ceph/)
[9:07] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) Quit (Ping timeout: 480 seconds)
[9:09] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[9:09] <phoenix> no error in log: 2015-02-25 10:55:15.178363 7f527e23e7c0 0 ceph version 0.80.6 (f93610a4421cb670b08e974c6550ee715ac528ae), process ceph-osd, pid 23522
[9:09] <phoenix> 2015-02-25 10:55:15.187552 7f527e23e7c0 0 filestore(/var/lib/ceph/osd/ceph-26) mount detected xfs (libxfs)
[9:09] <phoenix> 2015-02-25 10:55:15.187620 7f527e23e7c0 1 filestore(/var/lib/ceph/osd/ceph-26) disabling 'filestore replica fadvise' due to known issues with fadvise(DONTNEED) on xfs
[9:09] <phoenix> 2015-02-25 10:55:15.272908 7f527e23e7c0 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-26) detect_features: FIEMAP ioctl is supported and appears to work
[9:09] <phoenix> 2015-02-25 10:55:15.272947 7f527e23e7c0 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-26) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
[9:09] <phoenix> 2015-02-25 10:55:15.588334 7f527e23e7c0 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-26) detect_features: syscall(SYS_syncfs, fd) fully supported
[9:09] <phoenix> 2015-02-25 10:55:15.588491 7f527e23e7c0 0 xfsfilestorebackend(/var/lib/ceph/osd/ceph-26) detect_feature: extsize is disabled by conf
[9:09] <phoenix> 2015-02-25 10:55:15.639239 7f527e23e7c0 0 filestore(/var/lib/ceph/osd/ceph-26) mount: enabling WRITEAHEAD journal mode: checkpoint is not enabled
[9:09] <phoenix> 2015-02-25 10:55:15.639574 7f527e23e7c0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
[9:09] <phoenix> 2015-02-25 10:55:15.639610 7f527e23e7c0 1 journal _open /var/lib/ceph/osd/ceph-26/journal fd 20: 2097152000 bytes, block size 4096 bytes, directio = 1, aio = 0
[9:09] <phoenix> 2015-02-25 10:55:15.655688 7f527e23e7c0 1 journal _open /var/lib/ceph/osd/ceph-26/journal fd 20: 2097152000 bytes, block size 4096 bytes, directio = 1, aio = 0
[9:09] <phoenix> 2015-02-25 10:55:15.656757 7f527e23e7c0 1 journal close /var/lib/ceph/osd/ceph-26/journal
[9:09] <phoenix> 2015-02-25 10:55:15.672386 7f527e23e7c0 0 filestore(/var/lib/ceph/osd/ceph-26) mount detected xfs (libxfs)
[9:09] <phoenix> 2015-02-25 10:55:15.722357 7f527e23e7c0 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-26) detect_features: FIEMAP ioctl is supported and appears to work
[9:09] <phoenix> 2015-02-25 10:55:15.722378 7f527e23e7c0 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-26) detect_features: FIEMAP ioctl is disabled via 'filestore fiemap' config option
[9:09] <phoenix> 2015-02-25 10:55:15.796891 7f527e23e7c0 0 genericfilestorebackend(/var/lib/ceph/osd/ceph-26) detect_features: syscall(SYS_syncfs, fd) fully supported
[9:09] <phoenix> 2015-02-25 10:55:15.797030 7f527e23e7c0 0 xfsfilestorebackend(/var/lib/ceph/osd/ceph-26) detect_feature: extsize is disabled by conf
[9:09] <phoenix> 2015-02-25 10:55:15.864249 7f527e23e7c0 0 filestore(/var/lib/ceph/osd/ceph-26) mount: WRITEAHEAD journal mode explicitly enabled in conf
[9:09] <phoenix> 2015-02-25 10:55:15.864515 7f527e23e7c0 -1 journal FileJournal::_open: disabling aio for non-block journal. Use journal_force_aio to force use of aio anyway
[9:09] <phoenix> 2015-02-25 10:55:15.864546 7f527e23e7c0 1 journal _open /var/lib/ceph/osd/ceph-26/journal fd 21: 2097152000 bytes, block size 4096 bytes, directio = 1, aio = 0
[9:09] <phoenix> 2015-02-25 10:55:15.864678 7f527e23e7c0 1 journal _open /var/lib/ceph/osd/ceph-26/journal fd 21: 2097152000 bytes, block size 4096 bytes, directio = 1, aio = 0
[9:09] <phoenix> 2015-02-25 10:55:15.866767 7f527e23e7c0 0 <cls> cls/hello/cls_hello.cc:271: loading cls_hello
[9:09] <phoenix> 2015-02-25 10:55:15.878438 7f527e23e7c0 0 osd.26 35454 crush map has features 1107558400, adjusting msgr requires for clients
[9:09] <phoenix> 2015-02-25 10:55:15.878448 7f527e23e7c0 0 osd.26 35454 crush map has features 1107558400 was 8705, adjusting msgr requires for mons
[9:09] <phoenix> 2015-02-25 10:55:15.878451 7f527e23e7c0 0 osd.26 35454 crush map has features 1107558400, adjusting msgr requires for osds
[9:09] <phoenix> 2015-02-25 10:55:15.878461 7f527e23e7c0 0 osd.26 35454 load_pgs
[9:09] <phoenix> 2015-02-25 10:55:15.878492 7f527e23e7c0 0 osd.26 35454 load_pgs opened 0 pgs
[9:10] <phoenix> 2015-02-25 10:55:15.886013 7f526e5b7700 0 osd.26 35454 ignoring osdmap until we have initialized
[9:10] <phoenix> 2015-02-25 10:55:15.886434 7f526e5b7700 0 osd.26 35454 ignoring osdmap until we have initialized
[9:10] <phoenix> 2015-02-25 10:55:15.886664 7f527e23e7c0 0 osd.26 35454 done with init, starting boot process
[9:12] <Be-El> it's better to use a paste bin for more than 2-3 lines of log
[9:12] <Be-El> but as far as i can tell the osd looks fine
[9:13] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[9:14] <phoenix> That is what I do not understand: why they refuse to come up
[9:14] <Be-El> did you set some flag for the cluster like noout during the maintenance?
[9:15] <phoenix> I don't think there is anything like that in the config. What would that look like? :)
[9:15] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[9:17] <phoenix> [osd]
[9:17] <phoenix> osd journal size = 2000
[9:17] <phoenix> osd mkfs type = xfs
[9:17] <phoenix> osd mkfs options xfs = -f -i size=2048
[9:17] <phoenix> osd mount options xfs = rw,noatime,inode64
[9:17] <phoenix> filestore xattr use omap = true
[9:17] <phoenix> that's all there is in the config regarding osd
[9:17] <Be-El> can you upload the output of 'ceph -s' to a paste bin?
[9:18] <phoenix> paste bin - what is that?
[9:19] <Be-El> you can upload the text to http://pastebin.com/ and paste the resulting url here
[9:19] <phoenix> ok no problems
[9:22] <phoenix> http://pastebin.com/A4LbtJa2
[9:22] * dgurtner (~dgurtner@178.197.231.49) has joined #ceph
[9:22] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:23] <ZyTer> hi all
[9:25] * thomnico (~thomnico@82.166.93.197) Quit (Remote host closed the connection)
[9:25] <ZyTer> i tried installing calamari, but after vagrant up (the VM boots etc... all ok), the last line is: run_highstate set to false. Not running state.highstate.
[9:25] <ZyTer> i tried vagrant ssh, and then: salt-call state.highstate
[9:26] <ZyTer> but nothing : -bash: salt-call: command not found
[9:26] <Be-El> phoenix: one remark...you should setup a third monitor to avoid problems if one monitor fails
[9:26] <Be-El> phoenix: the cluster status looks ok. i've no clue why the osd is not reported as up
[9:26] <Be-El> phoenix: last attempt would be restarting one of the osd processes
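
(On a wheezy-era sysvinit install, restarting and watching a single OSD looks roughly like this sketch, using osd.26 from the paste above:)

    /etc/init.d/ceph restart osd.26
    tail -f /var/log/ceph/ceph-osd.26.log   # watch the boot sequence for errors
    ceph osd tree | grep -w osd.26          # check whether it is marked up
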
[9:27] <ZyTer> would you have an idea to bypass this problem...
[9:29] <phoenix> I know about the monitors, thank you. Now I am more concerned about the OSD.
[9:29] <phoenix> I tried restarting the process; it does not help. I even tried marking it in by hand with 'ceph osd in 26', but the result is the same.
[9:29] * analbeard (~shw@support.memset.com) has joined #ceph
[9:31] <phoenix> my boss is ready to bite my head off already :)
[9:31] <phoenix> I've been fighting this problem for days now and do not know what my clumsy hands have done wrong :(
[9:32] <phoenix> any ideas?
[9:32] * kawa2014 (~kawa@90.216.134.197) has joined #ceph
[9:33] <Be-El> you could remove the osds from the cluster and reinstall them afterwards
[9:34] <stannum> 2phoenix: since cluster health is ok, just remove the OSDs from the cluster: ceph osd down osd.$num ; ceph osd out osd.$num ; ceph osd crush remove osd.$num ; ceph auth del osd.$num ; ceph osd rm osd.$num
[9:34] <stannum> and then format and ceph-deploy the new OSDs
[9:34] <Be-El> but be warned that removing the osd will alter the crush map and result in data movement in the cluster
[9:35] <Be-El> and re-adding them will again alter the crush map...
[9:35] <stannum> if the OSD is in and up there will not be any data movement
[9:36] <stannum> yes, re-adding will cause data REdistribution
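
(stannum's removal sequence spelled out as a sketch; osd id 26 is the example from the paste above, sysvinit assumed:)

    NUM=26
    ceph osd out osd.$NUM             # stop placing data on it
    /etc/init.d/ceph stop osd.$NUM
    ceph osd crush remove osd.$NUM    # this is the step that triggers rebalancing
    ceph auth del osd.$NUM
    ceph osd rm $NUM
    # then re-create it, e.g.: ceph-deploy osd create <host>:<data-disk>:<journal>
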
[9:37] * cok (~chk@2a02:2350:18:1010:4597:89f1:baf0:3bad) Quit (Quit: Leaving.)
[9:39] * ibuclaw (~ibuclaw@host81-150-190-145.in-addr.btopenworld.com) has joined #ceph
[9:39] <ibuclaw> Hi, does the debian repo maintainer lurk around here?
[9:39] <Be-El> phoenix: start with one osd, remove it and reinstall it afterwards to check whether it will be recognized by the cluster
[9:47] <phoenix> I already did that operation, but it did not help.
[9:47] <phoenix> I suspect the problem is in the operating system: after installation all the disks ended up in software raid, and I tore the raid apart by hand.
[9:47] <phoenix> More and more I am inclined to think the server needs to be reinstalled entirely.
[9:48] <phoenix> and the disks added again from scratch.
[9:49] <phoenix> I even tried dd'ing /dev/zero onto the disk and then adding it back into the system; it does not work.
[9:54] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[9:57] * thb (~me@0001bd58.user.oftc.net) has joined #ceph
[9:58] <fvl> [Russian; lost to encoding]
[9:58] <phoenix> [Russian; lost to encoding]
[9:58] <fvl> [Russian; lost to encoding]
[9:58] <phoenix> [Russian; lost to encoding]
[9:59] <phoenix> %)
[9:59] * mookins (~mookins@induct3.lnk.telstra.net) Quit (Remote host closed the connection)
[9:59] <phoenix> [Russian; lost to encoding]
[9:59] * mookins (~mookins@induct3.lnk.telstra.net) has joined #ceph
[9:59] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[10:00] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[10:01] * itamarl (~itamar@194.90.7.244) has joined #ceph
[10:01] <itamarl> burley_: Good morning
[10:02] * cooldharma06 (~chatzilla@14.139.180.52) has joined #ceph
[10:02] <itamarl> good news regarding ceph-disk issue in rh7, I found a way to fix it and let ceph-disk create the partitions
[10:02] <fvl> phoenix: [Russian; lost to encoding]
[10:02] * jtang (~jtang@109.255.42.21) Quit (Remote host closed the connection)
[10:03] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[10:04] <itamarl> I commented out the part of using partx for rh flavoured OS as partprobe works just fine on rhel7
[10:05] <itamarl> it's in method update_partition
[10:08] <phoenix> [Russian; lost to encoding]
[10:09] <phoenix> [Russian; lost to encoding]
[10:11] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[10:24] <p01s0n> how is the total available space calculated in ceph? i have 3 nodes, each with 3 osds of 50G, 450G total, with a replication value of 2. But the ceph log shows 838 MB used, 403 GB available. Can someone help me understand this better
[10:25] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:30] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[10:30] <stannum> you can view the total available space with the command: ceph df
[10:30] <stannum> it shows space by pool, depending on each pool's replica size
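
(A sketch of the commands involved; the GLOBAL section of ceph df counts raw capacity across all OSDs, while the per-pool figures already account for replica count, so the two views are not directly comparable:)

    ceph df        # GLOBAL: raw SIZE / AVAIL / RAW USED; POOLS: usable space per pool
    ceph df detail # adds per-pool object counts (columns vary by release)
    rados df       # older per-pool view: KB used, objects, read/write ops
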
[10:35] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[10:37] * Miouge (~Miouge@94.136.92.20) Quit (Ping timeout: 480 seconds)
[10:38] <p01s0n> thanks stannum
[10:42] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[10:44] * jordanP (~jordan@213.215.2.194) has joined #ceph
[10:46] * markl (~mark@knm.org) Quit (Ping timeout: 480 seconds)
[10:47] * jtang (~jtang@109.255.42.21) has joined #ceph
[10:48] * thomnico (~thomnico@82.166.93.197) has joined #ceph
[10:49] * branto (~borix@178-253-138-120.3pp.slovanet.sk) has joined #ceph
[10:55] * cok (~chk@2a02:2350:18:1010:20e9:6297:3ba0:e870) has joined #ceph
[11:01] * dmick (~dmick@2607:f298:a:607:c5ec:52cf:f46:69f5) Quit (Ping timeout: 480 seconds)
[11:03] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has left #ceph
[11:05] * thomnico (~thomnico@82.166.93.197) Quit (Ping timeout: 480 seconds)
[11:06] * mookins (~mookins@induct3.lnk.telstra.net) Quit ()
[11:07] * jtang (~jtang@109.255.42.21) Quit (Remote host closed the connection)
[11:11] * dmick (~dmick@2607:f298:a:607:2d70:32ce:ee23:a470) has joined #ceph
[11:11] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[11:14] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[11:15] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[11:16] * nsantos (~Nelson@gw.cnc.uc.pt) has joined #ceph
[11:20] <p01s0n> is there any way to determine the partition space required for storing the journal
[11:20] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[11:23] * macjack1 (~Thunderbi@123.51.160.200) has joined #ceph
[11:23] * thb (~me@0001bd58.user.oftc.net) Quit (Remote host closed the connection)
[11:26] * macjack2 (~Thunderbi@123.51.160.200) has joined #ceph
[11:26] * macjack2 (~Thunderbi@123.51.160.200) Quit (Remote host closed the connection)
[11:26] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[11:28] * macjack (~Thunderbi@123.51.160.200) Quit (Ping timeout: 480 seconds)
[11:28] * thb (~me@0001bd58.user.oftc.net) has joined #ceph
[11:28] * Miouge (~Miouge@94.136.92.20) Quit ()
[11:28] * zack_dolby (~textual@aa20111001946AB81592.userreverse.dion.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[11:30] <stannum> p01s0n: I do not know, but I used ceph-deploy for this, and it makes a ~5350MB partition on a 2.7TB disk for the journal
[11:31] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[11:31] * macjack1 (~Thunderbi@123.51.160.200) Quit (Ping timeout: 480 seconds)
[11:33] <jcsp> more specifically… 478 OPTION(osd_journal_size, OPT_INT, 5120)
[11:33] <jcsp> ceph-disk takes that ceph setting as its default
[11:33] <jcsp> so "about 5 gigs" is the answer
[11:36] * macjack (~Thunderbi@123.51.160.200) has joined #ceph
[11:36] * anorak (~anorak@62.27.88.230) has joined #ceph
[11:37] * nsantos (~Nelson@gw.cnc.uc.pt) Quit (Quit: Leaving)
[11:37] * nsantos (~Nelson@gw.cnc.uc.pt) has joined #ceph
[11:39] <p01s0n> Thanks stannum and jcsp :)
[11:47] * branto (~borix@178-253-138-120.3pp.slovanet.sk) Quit (Ping timeout: 480 seconds)
[11:49] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[11:49] * jtang (~jtang@109.255.42.21) has joined #ceph
[11:50] * shyu (~Shanzhi@119.254.196.66) Quit (Remote host closed the connection)
[11:59] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[12:01] <p01s0n> for applying changes to OSD parameters, do we need to edit the conf file on all nodes, or is changing it on one node and restarting the osd enough? Right now i am changing it on all nodes and i don't know how to check whether it has changed or not. Can someone please help me :)
[12:08] <anorak> if i am not mistaken....if you deployed the ceph nodes using the admin node, you can propagate the same file to all your nodes using the admin node
[12:10] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:11] <bitserker> hi all
[12:11] <p01s0n> thanks anorak, how can we see the current parameters in use? suppose i want to see the (osd op threads) parameter from a running cluster, can this be seen using the CLI
[12:12] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[12:13] <bitserker> is there any way to know which OSDs are used to store a given image?
[12:15] <bitserker> p01s0n: http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/
[12:15] <bitserker> p01s0n: RUNTIME section
[12:16] <anorak> ok...wait...looking
[12:17] <anorak> ah...if i have not misunderstood, you want to view the ops at debug level, correct?
[12:18] <anorak> bitserker has pointed out correctly. Did not see bitserker's reply :)
[12:18] <bitserker> anorak: :)
[12:18] * nsantos (~Nelson@gw.cnc.uc.pt) Quit (Ping timeout: 480 seconds)
[12:19] <bitserker> is there any way to know which OSDs are used to store a given image? tnx in advance :D
[12:21] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[12:21] <flaf> p01s0n: "ceph daemon osd.$id config show --cluster $cluster --id $account" and you have the current config of "osd.$id" (it's verbose, so grep etc).
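
(Putting the answers together, a sketch for inspecting and changing an OSD setting at runtime; osd.0 and osd_op_threads are examples:)

    ceph daemon osd.0 config get osd_op_threads          # live value, via the admin socket
    ceph daemon osd.0 config show | grep osd_op_threads  # or grep the full dump
    ceph tell osd.* injectargs '--osd_op_threads 4'      # runtime change on all OSDs;
                                                         # lost on restart unless added to ceph.conf
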
[12:21] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[12:21] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit ()
[12:26] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) has joined #ceph
[12:27] * lifeboy (~roland@196.45.29.44) has joined #ceph
[12:28] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[12:28] * vbellur (~vijay@121.244.87.124) has joined #ceph
[12:32] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[12:34] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) Quit (Ping timeout: 480 seconds)
[12:37] * thomnico (~thomnico@82.166.93.197) has joined #ceph
[12:39] <bitserker> is there any way to know which OSDs are used to store a given image?
[12:41] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[12:41] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[12:42] * kefu (~kefu@114.92.100.153) Quit (Ping timeout: 480 seconds)
[12:43] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[12:43] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[12:45] * kefu (~kefu@114.92.100.153) has joined #ceph
[12:46] <flaf> bitserker: you can list objects of a pool with "rados -p $pool ls -" and you can know which osds contain a specific object with "ceph osd map $pool $object_name", but I don't know how you can know which objects belong to a specific rbd image, sorry.
[12:46] * swami1 (~swami@49.32.0.177) has joined #ceph
[12:47] * phoenix (~phoenix@vpn1.safedata.ru) Quit ()
[12:47] <flaf> The objects of your rbd image are probably spread across all your osds, I suppose...
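
(A sketch that fills the gap flaf mentions: rbd info prints the image's object-name prefix, so its objects can be filtered out of rados ls and mapped to OSDs one by one; the pool/image names and the prefix value here are made-up examples:)

    rbd info rbd/myimage | grep block_name_prefix  # e.g. rbd_data.102a74b0dc51
    rados -p rbd ls | grep rbd_data.102a74b0dc51   # objects belonging to that image
    ceph osd map rbd rbd_data.102a74b0dc51.0000000000000000  # PG and OSDs for one object
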
[12:47] * ganders (~root@200.32.121.70) has joined #ceph
[12:48] * p01s0n (~oftc-webi@hpm01cs002-ext.asiapac.hp.net) Quit (Quit: Page closed)
[12:48] * kefu (~kefu@114.92.100.153) Quit (Max SendQ exceeded)
[12:53] * vbellur (~vijay@121.244.87.117) has joined #ceph
[12:56] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[12:59] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[13:02] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[13:02] <bitserker> flaf: i'm using ceph with proxmox. I thought this information could be useful for determining which VM is consuming disk operations
[13:02] <bitserker> flaf: :-/
[13:02] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[13:03] * wangqty (~qiang@111.204.252.6) has joined #ceph
[13:03] <bitserker> flaf: tnx a lot. im going to test it
[13:04] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[13:06] * stannum (~stannum@85.233.67.10) Quit (Quit: Konversation terminated!)
[13:06] * stannum (~stannum@85.233.67.10) has joined #ceph
[13:07] * Nats_ (~natscogs@114.31.195.238) Quit (Ping timeout: 480 seconds)
[13:10] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[13:11] * thomnico (~thomnico@82.166.93.197) Quit (Remote host closed the connection)
[13:17] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:24] * thomnico (~thomnico@82.166.93.197) has joined #ceph
[13:28] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[13:30] * dgurtner_ (~dgurtner@178.197.225.28) has joined #ceph
[13:32] * kefu (~kefu@114.92.100.153) has joined #ceph
[13:32] * dgurtner (~dgurtner@178.197.231.49) Quit (Ping timeout: 480 seconds)
[13:32] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[13:35] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:35] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[13:35] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[13:37] * visualne (~oftc-webi@158-147-148-234.harris.com) Quit (Quit: Page closed)
[13:38] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[13:38] * jks (~jks@178.155.151.121) has joined #ceph
[13:41] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[13:45] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[13:46] * marrusl (~mark@cpe-24-90-46-248.nyc.res.rr.com) Quit (Remote host closed the connection)
[13:46] * kefu (~kefu@114.92.100.153) Quit (Quit: Textual IRC Client: www.textualapp.com)
[13:46] * cooldharma06 (~chatzilla@14.139.180.52) Quit (Remote host closed the connection)
[13:50] * marrusl (~mark@cpe-24-90-46-248.nyc.res.rr.com) has joined #ceph
[13:50] * tupper (~tcole@rtp-isp-nat-pool1-1.cisco.com) Quit (Read error: Connection reset by peer)
[13:52] * todin_ is now known as todin
[13:53] * georgem (~Adium@207.164.79.23) has joined #ceph
[14:01] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) has joined #ceph
[14:02] * Manshoon (~Manshoon@c-50-181-29-219.hsd1.wv.comcast.net) Quit (Remote host closed the connection)
[14:03] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[14:06] * tupper (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) has joined #ceph
[14:06] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:08] * MrBy (~MrBy@85.115.23.2) Quit (Ping timeout: 480 seconds)
[14:11] * sugoruyo (~sug_@00014f5c.user.oftc.net) has joined #ceph
[14:13] <sugoruyo> hi folks, has anyone noticed that when you `rados -p <pool> stat <object>` and <object> doesn't exist it then shows up in `rados -p <pool> ls`?
[14:17] * linjan (~linjan@213.8.240.146) has joined #ceph
[14:17] <ZyTer> For calamari, i tried an ubuntu precise and a debian; on both i have a problem with salt: "salt-call: command not found". Do you have any idea, or does it work with another distro?
[14:18] * vbellur (~vijay@121.244.87.124) has joined #ceph
[14:19] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) has joined #ceph
[14:23] * cdelatte (~cdelatte@204-235-114.64.twcable.com) has joined #ceph
[14:23] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:25] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[14:30] * zack_dolby (~textual@pa3b3a1.tokynt01.ap.so-net.ne.jp) has joined #ceph
[14:32] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[14:34] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[14:36] * georgem (~Adium@207.164.79.23) Quit (Quit: Leaving.)
[14:37] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[14:40] * KevinPerks (~Adium@cpe-071-071-026-213.triad.res.rr.com) has joined #ceph
[14:42] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[14:43] * thomnico (~thomnico@82.166.93.197) Quit (Quit: Ex-Chat)
[14:43] * thomnico (~thomnico@82.166.93.197) has joined #ceph
[14:43] * hybrid512 (~walid@195.200.167.70) Quit (Quit: Leaving.)
[14:43] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[14:44] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[14:44] * swami1 (~swami@49.32.0.177) Quit (Quit: Leaving.)
[14:48] <stannum> pls help, recently found that total RAW space and space used do not correspond to each other: http://pastebin.com/8N38u96A
[14:50] <stannum> 242Tb (Total) - 222Tb (available) = 20Tb, but RAW used is only 7.8Tb
[14:51] * jskinner (~jskinner@host-95-2-129.infobunker.com) has joined #ceph
[14:52] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[14:55] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[14:57] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[14:58] * georgem (~Adium@fwnat.oicr.on.ca) Quit ()
[14:58] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[14:59] * sjm (~sjm@pool-98-109-11-113.nwrknj.fios.verizon.net) has joined #ceph
[15:01] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[15:02] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[15:04] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[15:05] <flaf> sugoruyo: not for me. If I run "rados -p <pool> stat foo", I get "error stat-ing <pool>/foo: (2) No such file or directory". I use ceph firefly 0.80.8 on Ubuntu Trusty.
[15:07] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[15:09] * jwilkins (~jwilkins@216.1.187.164) has joined #ceph
[15:09] * tupper (~tcole@108-83-203-37.lightspeed.rlghnc.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[15:11] <sugoruyo> flaf: if I do `rados -p POOL stat oaiusdjaoiz` and then a `rados -p POOL ls`, I get an object named 'oaiusdjaoiz' in the output
[15:12] <flaf> stannum: sorry, I don't understand your offset. I don't have such an offset in my (testing) cluster (SIZE=5172G, AVAIL=5108G and RAW_USED=65593M), but the sizes are much smaller than in your cluster. It's curious...
[15:14] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[15:14] * nitti (~nitti@162.222.47.218) has joined #ceph
[15:15] <flaf> sugoruyo: not for me. After `rados -p POOL stat foobar1234`, there is no "foobar1234" object in the pool.
[15:15] <flaf> (I have tested)
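For anyone trying to reproduce sugoruyo's report, a minimal check might look like this (the pool name 'rbd' and the object name are placeholders):

    rados -p rbd stat no-such-object        # expected: error stat-ing rbd/no-such-object: (2) No such file or directory
    rados -p rbd ls | grep no-such-object   # on an affected cluster the phantom name reportedly appears here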
[15:16] * thomnico (~thomnico@82.166.93.197) Quit (Ping timeout: 480 seconds)
[15:16] <stannum> flaf: what do you mean when you say "offset"? I don't understand
[15:17] <flaf> stannum: sorry for my poor English, I would say "shift" ("décalage" in French ;)
[15:18] * jskinner (~jskinner@host-95-2-129.infobunker.com) Quit (Quit: Leaving...)
[15:18] <stannum> my English is not that great either, because I'm from Russia :)
[15:19] * tupper (~tcole@rtp-isp-nat1.cisco.com) has joined #ceph
[15:19] <flaf> :)
[15:19] * vbellur (~vijay@122.167.67.235) has joined #ceph
[15:20] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[15:20] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[15:21] <flaf> So, your shift between "SIZE-AVAIL" and RAW_USED is very strange, I have no explanation. Sorry.
[15:21] <stannum> that's exactly the question!
[15:22] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[15:25] <stannum> So can anyone explain the output of the 'ceph df' command to me? (http://pastebin.com/8N38u96A) There is a big gap between (total - available) and used space (20TB vs 7.8TB)
[15:26] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[15:27] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[15:27] <skorgu> looks like you're only using a few percent of the available space with data
[15:29] <stannum> but that does not explain why used space + available space does not equal total space
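One way to narrow a gap like this down is to compare what ceph reports against the OSD filesystems directly; a rough cross-check, assuming the default OSD mount paths:

    ceph df                          # GLOBAL: SIZE, AVAIL, RAW USED
    df -h /var/lib/ceph/osd/ceph-*   # on each OSD host; the sums should roughly match SIZE/AVAIL
    # journals, filesystem overhead and reserved blocks can count against AVAIL
    # without appearing in RAW USED, which may explain part of such a gap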
[15:29] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[15:29] * markl (~mark@knm.org) has joined #ceph
[15:31] <flaf> And the gap is not insignificant (~12TB).
[15:31] <stannum> exactly
[15:31] <skorgu> ah I misunderstood
[15:31] <skorgu> no idea
[15:32] <flaf> stannum: just for information, what is your ceph version, and your OS?
[15:33] <stannum> debian wheezy and giant release
[15:34] <stannum> ceph version 0.87
[15:34] <flaf> and the kernel version?
[15:34] <stannum> 3.2
[15:34] * georgem (~Adium@fwnat.oicr.on.ca) has left #ceph
[15:34] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) has joined #ceph
[15:35] <stannum> 3.2.65-1+deb7u1
[15:35] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[15:36] <flaf> stannum: maybe you should post your question to the mailing list...
[15:37] <flaf> Maybe it's a bug...
[15:37] * kefu (~kefu@114.92.100.153) has joined #ceph
[15:38] <stannum> maybe
[15:40] * Miouge (~Miouge@94.136.92.20) has joined #ceph
[15:41] <stannum> will post it in ceph-community@ceph.com list
[15:42] * thomnico (~thomnico@82.166.93.197) has joined #ceph
[15:43] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:43] <flaf> stannum: ceph-users@ceph.com, I think that's the better list.
[15:52] * jwilkins (~jwilkins@216.1.187.164) Quit (Ping timeout: 480 seconds)
[15:52] * dgurtner_ (~dgurtner@178.197.225.28) Quit (Read error: Connection reset by peer)
[15:54] * Manshoon (~Manshoon@208.184.50.131) has joined #ceph
[15:55] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[15:57] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[16:01] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) has joined #ceph
[16:03] * thomnico (~thomnico@82.166.93.197) Quit (Read error: No route to host)
[16:04] * thomnico (~thomnico@82.166.93.197) has joined #ceph
[16:11] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[16:13] * primechuck (~primechuc@173-17-128-36.client.mchsi.com) has joined #ceph
[16:15] * joef2 (~Adium@2620:79:0:2420::4) has joined #ceph
[16:19] * dgurtner (~dgurtner@178.197.227.162) has joined #ceph
[16:24] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has joined #ceph
[16:25] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[16:27] * moore (~moore@64.202.160.88) has joined #ceph
[16:31] * ccheng (~ccheng@c-50-165-131-154.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[16:33] * dmsimard_away is now known as dmsimard
[16:33] * via (~via@smtp2.matthewvia.info) Quit (Remote host closed the connection)
[16:34] * via (~via@smtp2.matthewvia.info) has joined #ceph
[16:35] * garphy`aw is now known as garphy
[16:40] * Manshoon (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[16:41] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[16:44] * Amto_res (~amto_res@ks312256.kimsufi.com) has joined #ceph
[16:44] * agshew_ (~agshew@host-69-145-59-76.bln-mt.client.bresnan.net) has joined #ceph
[16:44] * artem (~artem@5.164.208.57) has joined #ceph
[16:45] <Amto_res> Hello, is there a way to take a snapshot of buckets?
[16:46] <artem> hello. has anyone run into the problem of incorrect time display in the rgw usage log?
[16:46] * saltlake (~saltlake@12.250.199.170) has joined #ceph
[16:47] <artem> the time shown is "localtime - 2*(localtime - utc)"
[16:47] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[16:47] <artem> if I change the timezone on the rgw node to UTC, the time shows correctly
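A worked example of the skew artem describes, assuming a UTC+3 timezone on the rgw node:

    # actual local time: 15:00, so UTC is 12:00 and (localtime - utc) = 3h
    # displayed: 15:00 - 2*3h = 09:00, i.e. UTC minus the zone offset
    # consistent with the observation that setting the node's timezone to UTC fixes it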
[16:47] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[16:47] * ccheng (~ccheng@128.211.165.1) has joined #ceph
[16:48] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:56] * CephTestC (~CephTestC@199.91.185.156) has joined #ceph
[16:58] * avozza (~avozza@a83-160-116-36.adsl.xs4all.nl) Quit (Remote host closed the connection)
[17:04] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:06] * cmorandin (~cmorandin@67.53.158.77.rev.sfr.net) has joined #ceph
[17:10] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:10] * dgurtner (~dgurtner@178.197.227.162) Quit (Read error: Connection reset by peer)
[17:10] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has left #ceph
[17:14] * cok (~chk@2a02:2350:18:1010:20e9:6297:3ba0:e870) Quit (Quit: Leaving.)
[17:14] * artem (~artem@5.164.208.57) Quit (Quit: Leaving)
[17:15] * amote (~amote@1.39.96.72) has joined #ceph
[17:17] * joef2 (~Adium@2620:79:0:2420::4) Quit (Read error: Connection reset by peer)
[17:18] * xarses_ (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:22] * jwilkins (~jwilkins@2600:1012:b05a:e656:ea2a:eaff:fe08:3f1d) has joined #ceph
[17:24] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[17:26] * wangqty (~qiang@111.204.252.6) Quit (Quit: Leaving.)
[17:27] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[17:28] * puffy (~puffy@216.207.42.144) has joined #ceph
[17:34] * jclm (~jclm@209.49.224.62) has joined #ceph
[17:37] * avozza (~avozza@static-114-198-78-212.thenetworkfactory.nl) has joined #ceph
[17:37] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) has joined #ceph
[17:38] * joshd1 (~jdurgin@24-205-54-236.dhcp.gldl.ca.charter.com) has joined #ceph
[17:38] * joshd1 (~jdurgin@24-205-54-236.dhcp.gldl.ca.charter.com) Quit ()
[17:38] * thomnico (~thomnico@82.166.93.197) Quit (Ping timeout: 480 seconds)
[17:39] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[17:39] * jwilkins (~jwilkins@2600:1012:b05a:e656:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[17:40] * hybrid512 (~walid@195.200.167.70) Quit (Quit: Leaving.)
[17:41] * i_m (~ivan.miro@deibp9eh1--blueice2n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[17:41] * bearkitten (~bearkitte@cpe-66-27-98-26.san.res.rr.com) has joined #ceph
[17:43] * gregmark (~Adium@68.87.42.115) has joined #ceph
[17:44] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) has joined #ceph
[17:44] * itamarl (~itamar@194.90.7.244) Quit (Quit: Lost terminal)
[17:45] * jtang (~jtang@109.255.42.21) Quit (Ping timeout: 480 seconds)
[17:46] * rmoe (~quassel@173-228-89-134.dsl.static.fusionbroadband.com) Quit (Ping timeout: 480 seconds)
[17:47] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[17:47] * amote (~amote@1.39.96.72) Quit (Quit: Leaving)
[17:47] * agshew_ (~agshew@host-69-145-59-76.bln-mt.client.bresnan.net) Quit (Ping timeout: 480 seconds)
[17:50] * rljohnsn (~rljohnsn@c-73-15-126-4.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:53] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[17:53] * Dasher (~oftc-webi@46.218.69.130) has joined #ceph
[17:55] * xarses_ (~andreww@12.164.168.117) has joined #ceph
[17:55] * jwilkins (~jwilkins@38.122.20.226) has joined #ceph
[17:56] * bandrus (~brian@57.sub-70-211-74.myvzw.com) has joined #ceph
[17:56] * swami1 (~swami@116.75.101.33) has joined #ceph
[17:59] * ircolle (~ircolle@38.122.20.226) has joined #ceph
[18:00] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[18:01] * ircolle (~ircolle@38.122.20.226) Quit ()
[18:01] * ircolle (~ircolle@38.122.20.226) has joined #ceph
[18:01] <Manshoon> @visualne did you get an answer about the mon question you had?
[18:01] <Manshoon> the fact that a down monitor is still in rotation?
[18:02] <Manshoon> i would assume the cluster will keep trying until you drop it completely out of the config
[18:02] <Manshoon> or had you done that and it was still trying to reach that mon
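For reference, dropping a dead monitor so clients stop trying it might look like this (the mon name 'b' is a placeholder):

    ceph mon remove b   # remove the monitor from the monmap
    # then delete it from mon_initial_members/mon_host in ceph.conf on clients,
    # otherwise they may keep attempting to reach the dead address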
[18:03] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:06] * LeaChim (~LeaChim@host86-159-234-113.range86-159.btcentralplus.com) has joined #ceph
[18:09] * linjan (~linjan@213.8.240.146) has joined #ceph
[18:17] * kefu (~kefu@114.92.100.153) Quit (Max SendQ exceeded)
[18:17] * Manshoon_ (~Manshoon@208.184.50.131) has joined #ceph
[18:18] * Manshoon_ (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[18:18] * Manshoon_ (~Manshoon@199.16.199.4) has joined #ceph
[18:19] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[18:19] * Manshoon (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[18:20] * linjan (~linjan@80.179.241.26) has joined #ceph
[18:23] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[18:25] * Manshoon (~Manshoon@208.184.50.131) has joined #ceph
[18:25] * Manshoon (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[18:26] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[18:27] * vasu (~vasu@38.122.20.226) has joined #ceph
[18:31] * rikai (~Jase@tor-exit-node--proxy.scalaire.com) has joined #ceph
[18:32] * Manshoon_ (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[18:34] * rikai (~Jase@1CIAAGQPQ.tor-irc.dnsbl.oftc.net) Quit ()
[18:37] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[18:39] * Random1 (~Wizeon@108.61.210.123) has joined #ceph
[18:40] * lalatenduM (~lalatendu@122.171.204.100) has joined #ceph
[18:40] * ChrisNBlum (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Remote host closed the connection)
[18:40] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[18:45] * Manshoon (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[18:45] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[18:46] * jordanP (~jordan@213.215.2.194) Quit (Quit: Leaving)
[18:47] * ChrisNBlum (~ChrisNBlu@178.255.153.117) has joined #ceph
[18:49] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) has joined #ceph
[18:50] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[18:51] * vasu (~vasu@38.122.20.226) Quit (Ping timeout: 480 seconds)
[18:52] * Concubidated (~Adium@71.21.5.251) has joined #ceph
[18:55] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[18:56] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit ()
[18:56] * puffy1 (~puffy@216.207.42.129) has joined #ceph
[18:58] * swami1 (~swami@116.75.101.33) Quit (Quit: Leaving.)
[19:02] * puffy (~puffy@216.207.42.144) Quit (Ping timeout: 480 seconds)
[19:05] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[19:06] * brutuscat (~brutuscat@73.Red-81-38-218.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:09] * zok (zok@neurosis.pl) Quit (Ping timeout: 480 seconds)
[19:09] * Random1 (~Wizeon@1CIAAGQP5.tor-irc.dnsbl.oftc.net) Quit ()
[19:10] * bret1 (~Vidi@94.142.242.30) has joined #ceph
[19:11] * vasu (~vasu@38.122.20.226) has joined #ceph
[19:12] * vilobhmm (~vilobhmm@nat-dip33-wl-g.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:14] * zok (zok@neurosis.pl) has joined #ceph
[19:15] * Anticimex (anticimex@95.80.32.80) Quit (Ping timeout: 480 seconds)
[19:17] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[19:19] * Anticimex (anticimex@95.80.32.80) has joined #ceph
[19:27] * vbellur1 (~vijay@122.172.253.142) has joined #ceph
[19:28] * vbellur (~vijay@122.167.67.235) Quit (Ping timeout: 480 seconds)
[19:28] * Miouge (~Miouge@94.136.92.20) Quit (Quit: Miouge)
[19:29] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[19:29] * mykola (~Mikolaj@91.225.201.255) has joined #ceph
[19:30] * Nacer (~Nacer@2001:41d0:fe82:7200:e0bb:d080:d784:4067) has joined #ceph
[19:32] * sugoruyo (~sug_@00014f5c.user.oftc.net) Quit (Quit: Leaving)
[19:33] <sage> jamespage: can you take a final look at that python-ceph split? passes my upgrade test
[19:33] <sage> https://github.com/ceph/ceph/pull/3788
[19:36] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[19:39] * Sysadmin88 (~IceChat77@2.125.213.8) has joined #ceph
[19:39] * vbellur1 (~vijay@122.172.253.142) Quit (Ping timeout: 480 seconds)
[19:39] * bret1 (~Vidi@3N2AABBUZ.tor-irc.dnsbl.oftc.net) Quit ()
[19:40] * measter (~ggg@192.42.116.16) has joined #ceph
[19:40] * branto (~borix@ip-213-220-214-203.net.upcbroadband.cz) has left #ceph
[19:41] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[19:41] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[19:47] * Manshoon (~Manshoon@208.184.50.131) has joined #ceph
[19:49] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[19:51] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit ()
[19:53] * qhartman (~qhartman@den.direwolfdigital.com) Quit (Quit: Ex-Chat)
[19:55] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[19:57] * kevinkevin (52edc5d1@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[19:57] * kevinkevin (52edc5d1@107.161.19.109) has joined #ceph
[20:01] * cholcombe973 (~chris@7208-76ef-ff1f-ed2f-329a-f002-3420-2062.6rd.ip6.sonic.net) has joined #ceph
[20:02] * rljohnsn1 (~rljohnsn@ns25.8x8.com) has joined #ceph
[20:03] * kevinkevin (52edc5d1@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[20:04] * kevinkevin (52edc5d1@107.161.19.109) has joined #ceph
[20:05] * linjan_ (~linjan@213.8.240.146) has joined #ceph
[20:06] * kevinkevin (52edc5d1@107.161.19.109) Quit ()
[20:06] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (Quit: Leaving)
[20:07] * davidz (~davidz@2605:e000:1313:8003:213a:f11e:8cdc:2bad) has joined #ceph
[20:07] * davidzlap1 (~Adium@2605:e000:1313:8003:603b:6fe6:6103:dbe0) Quit (Quit: Leaving.)
[20:07] * kevinkevin (52edc5d1@107.161.19.109) has joined #ceph
[20:08] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit (Ping timeout: 480 seconds)
[20:09] * measter (~ggg@1CIAAGQSZ.tor-irc.dnsbl.oftc.net) Quit ()
[20:10] <baffle> Hi, after a crazy day of cascading but technically unrelated events, leading to network failures and power outages+fluctuations, I have 2 broken OSDs (out of 36) which are now lost. After lots of massaging, I am finally left with 3387 active+clean PGs, 4 incomplete PGs, and one down+incomplete. The incomplete PGs are seemingly looking for writes which they think are on the two lost/removed OSDs. How can I revert this state?
[20:11] * Manshoon (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[20:12] <baffle> I.e. the now-missing OSDs are listed under "down_osds_we_would_probe". And the down one has a missing OSD as active/primary..
[20:12] * kevinkevin (52edc5d1@107.161.19.109) Quit ()
[20:12] * kevinkevin (52edc5d1@107.161.19.109) has joined #ceph
[20:12] * rljohnsn1 (~rljohnsn@ns25.8x8.com) Quit (Read error: Connection reset by peer)
[20:12] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[20:12] <baffle> They still think it is up.
[20:12] * linjan (~linjan@80.179.241.26) Quit (Ping timeout: 480 seconds)
[20:12] * xarses_ (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[20:14] * kevinkevin (52edc5d1@107.161.19.109) Quit ()
[20:14] * clusterfudge (~Kidlvr@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[20:15] * macjack (~Thunderbi@123.51.160.200) Quit (Quit: macjack)
[20:15] * yghannam (~yghannam@0001f8aa.user.oftc.net) has joined #ceph
[20:17] <baffle> Query details on a PG: http://pastebin.com/hFWJBswN .. If someone could help that would be awesome.
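If the lost OSDs really are unrecoverable, the commands usually suggested in this situation are destructive but look roughly like this (the OSD id and PG id are placeholders):

    ceph osd lost 12 --yes-i-really-mean-it   # tell peering to stop waiting for the dead OSD
    ceph pg 4.2f query                        # then re-check down_osds_we_would_probe
    # only consider 'ceph pg <pgid> mark_unfound_lost revert' once PGs peer and report unfound objects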
[20:18] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[20:18] * Manshoon_ (~Manshoon@208.184.50.131) has joined #ceph
[20:19] * Manshoon_ (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[20:19] * togdon (~togdon@74.121.28.6) has joined #ceph
[20:19] * Manshoon_ (~Manshoon@199.16.199.4) has joined #ceph
[20:20] * dalgaaf (uid15138@id-15138.charlton.irccloud.com) has joined #ceph
[20:21] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[20:21] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit ()
[20:21] * TMM_ (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[20:23] * xarses_ (~andreww@12.164.168.117) has joined #ceph
[20:23] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[20:25] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) Quit (Quit: Ex-Chat)
[20:26] * Manshoon (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[20:26] * nhm (~nhm@65-128-165-174.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[20:27] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) has joined #ceph
[20:28] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[20:28] * kawa2014 (~kawa@90.216.134.197) Quit (Quit: Leaving)
[20:29] * segutier_ (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) has joined #ceph
[20:29] * linjan_ (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[20:29] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[20:29] * PaulC (~paul@122-60-36-115.jetstream.xtra.co.nz) Quit (Ping timeout: 480 seconds)
[20:29] * avozza (~avozza@static-114-198-78-212.thenetworkfactory.nl) Quit (Remote host closed the connection)
[20:31] * Manshoon (~Manshoon@208.184.50.131) has joined #ceph
[20:32] * segutier (~segutier@c-24-6-218-139.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[20:32] * segutier_ is now known as segutier
[20:34] * Manshoon (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[20:35] * nhm (~nhm@65-128-142-103.mpls.qwest.net) has joined #ceph
[20:35] * ChanServ sets mode +o nhm
[20:35] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[20:36] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[20:36] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[20:37] * ChrisNBl_ (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[20:38] * Manshoon_ (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[20:38] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[20:38] * bdonnahue (~James@24-148-51-171.c3-0.mart-ubr1.chi-mart.il.cable.rcn.com) has joined #ceph
[20:39] <bdonnahue> hello is it possible to install ceph on LUKS?
[20:40] <bdonnahue> here is a pastebin of my osds being prepared
[20:40] <bdonnahue> here is a pastebin http://pastebin.com/UPtqDDsU
[20:40] <bdonnahue> I'm not sure what is going wrong. I thought LUKS would abstract the disk etc
[20:42] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[20:43] <saltlake> Champs: What is the expectation if I have a particular rbd mapped onto 2 different clients and mounted ?
[20:43] * ChrisNBlum (~ChrisNBlu@178.255.153.117) Quit (Ping timeout: 480 seconds)
[20:44] * clusterfudge (~Kidlvr@3N2AABBYI.tor-irc.dnsbl.oftc.net) Quit ()
[20:46] <fghaas> saltlake: the expectation is that you'd get your backups out shortly
[20:46] <fghaas> bdonnahue: yes, and in fact this is supported in ceph-deploy
[20:46] <fghaas> bdonnahue: see http://ceph.com/docs/master/rados/deployment/ceph-deploy-osd/ -- look for "dm-crypt"
[20:47] <saltlake> fghaas: "backups out shortly?"
[20:48] <saltlake> fghaas: Will there be filesystem issues since the same rbd is mounted in multiple places? I think it could help me create a failover system in case the server hosting the rbd dies.
[20:49] <burley_> saltlake: Unless your filesystem on the rbd allows for multiple mounts, you'll destroy it
[20:49] <fghaas> if you have a non-cluster filesystem on that rbd (ext4, xfs, whatever) and you have mounted it on 2 servers, chances are it's shredded already
[20:50] <saltlake> fghaas: I do have ext4 so it is a bad idea to have it mounted in 2 places :-) What is a good choice if I do want it mounted in 2 places for it so be consistent ?
[20:50] <saltlake> "to be consistent"
[20:51] <saltlake> fghaas: What if it is NOT mounted but just mapped ?Is it still at risk ?
[20:51] <fghaas> there is no good choice, any choice you make that won't shred your fs will turn you into an alcoholic or worse
[20:51] <fghaas> yeah mapping is fine
[20:51] <burley_> saltlake: you could use drbd and 2 RBDs; you could use pacemaker and fail the rbd mount over, or you could use a clustered filesystem
[20:51] <fghaas> but if you actually open the block device like a filesystem mount would, it would have to be OCFS2 or GFS2 to work, and that opens a can of worms
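To make the distinction concrete, a sketch of the pattern being described (pool/image names are placeholders): mapping on both nodes is fine, but a non-cluster filesystem must only ever be mounted on one node at a time.

    rbd map rbd/myimage                             # safe on node A and node B
    mount /dev/rbd/rbd/myimage /mnt                 # on the active node ONLY (ext4/xfs)
    umount /mnt && rbd unmap /dev/rbd/rbd/myimage   # before the standby node mounts it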
[20:51] <saltlake> fghaas: :-) :-) Thanks, I won't mount it .. since I am a total teetotaler :-)
[20:52] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[20:53] <saltlake> burley_: I saw Seb's blog on setting up drbd and pacemaker.. do you have a pointer to how to set it up? It seemed like it was WIP. I also think I saw something about inktank/redhat working to build DR features into ceph
[20:54] <burley_> saltlake: I don't have a good example handy; I had to stumble through getting pacemaker to share RBDs among NFS servers, working from older examples whose commands have since been updated/replaced
[20:54] <burley_> and tbh, its not working great
[20:55] * fghaas (~florian@91-119-130-192.dynamic.xdsl-line.inode.at) has left #ceph
[20:55] <bdonnahue> fghaas: thanks looking into this now
[20:55] * elder_ (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[20:56] <saltlake> burley_: Thanks.. I have done a drbd+heartbeat setup with regular drives and it was problematic there too .. and I am worried about promising to make drbd work with ceph rbds for fear of data loss of some sort.. hence I am inclined to look for a DR feature that is part of ceph inherently, or something that makes it an integral part of it.
[20:56] <saltlake> burley_: (BTW I am not drunk but my fingers and spellings are not working properly today)
[20:57] <saltlake> fghaas: thanks for helping me :-)
[20:58] <saltlake> burley_:thanks for drbd tip..
[20:59] * Nacer (~Nacer@2001:41d0:fe82:7200:e0bb:d080:d784:4067) Quit (Remote host closed the connection)
[21:00] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:05] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[21:05] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[21:08] * Mariane21 (~Mariane21@95.141.20.199) has joined #ceph
[21:10] * Mariane21 (~Mariane21@95.141.20.199) Quit (autokilled: Please do not spam on IRC. Email support@oftc.net with questions. (2015-02-25 20:10:17))
[21:10] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[21:11] <bdonnahue> fghaas: I looked at the page and did some googling but I don't see any examples of how to set the hash or encryption algo etc?
[21:11] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[21:11] * smokedmeets (~smokedmee@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[21:11] <bdonnahue> I'm new to encryption so perhaps I am missing something
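For what it's worth, the dm-crypt support on that docs page is driven by a flag rather than explicit cipher settings; a minimal sketch (hostname and device are placeholders):

    ceph-deploy osd prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys node1:/dev/sdb
    # ceph-disk creates the LUKS container with cryptsetup's defaults; the docs
    # don't appear to expose a per-OSD knob for the hash or cipher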
[21:11] * smokedmeets (~smokedmee@c-67-174-241-112.hsd1.ca.comcast.net) has left #ceph
[21:12] * rljohnsn (~rljohnsn@ns25.8x8.com) Quit ()
[21:13] * togdon (~togdon@74.121.28.6) has joined #ceph
[21:13] * rljohnsn (~rljohnsn@ns25.8x8.com) has joined #ceph
[21:14] * Tenk (~cooey@ks4003088.ip-142-4-208.net) has joined #ceph
[21:16] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[21:17] * Manshoon (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[21:18] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[21:20] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[21:20] * jwilkins (~jwilkins@38.122.20.226) Quit (Ping timeout: 480 seconds)
[21:21] * Manshoon (~Manshoon@208.184.50.131) has joined #ceph
[21:24] * Manshoon (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[21:24] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[21:25] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[21:26] * rwheeler (~rwheeler@173.48.208.246) Quit (Quit: Leaving)
[21:26] * agshew_ (~agshew@host-69-145-59-76.bln-mt.client.bresnan.net) has joined #ceph
[21:26] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[21:29] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[21:30] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[21:30] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[21:30] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[21:33] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:34] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[21:36] * jwilkins (~jwilkins@38.122.20.226) has joined #ceph
[21:37] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[21:40] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[21:44] * Tenk (~cooey@3N2AABB1N.tor-irc.dnsbl.oftc.net) Quit ()
[21:44] * togdon (~togdon@74.121.28.6) has joined #ceph
[21:46] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[21:47] * PaulC (~paul@122-60-36-115.jetstream.xtra.co.nz) has joined #ceph
[21:48] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[21:50] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) Quit (Remote host closed the connection)
[21:50] * Manshoon_ (~Manshoon@199.16.199.4) has joined #ceph
[21:53] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[21:53] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[21:53] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[21:55] * Manshoon (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[21:55] * elder_ (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[21:56] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[21:58] * lalatenduM (~lalatendu@122.171.204.100) Quit (Quit: Leaving)
[21:59] * lalatenduM (~lalatendu@122.171.204.100) has joined #ceph
[22:00] * ganders (~root@200.32.121.70) Quit (Quit: WeeChat 0.4.2)
[22:02] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[22:02] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[22:02] * dyasny (~dyasny@173.231.115.58) has joined #ceph
[22:02] * togdon (~togdon@74.121.28.6) has joined #ceph
[22:04] * Manshoon (~Manshoon@208.184.50.131) has joined #ceph
[22:05] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:05] * Manshoon (~Manshoon@208.184.50.131) Quit (Remote host closed the connection)
[22:05] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[22:06] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[22:06] * bsanders (~bsanders@ip68-7-69-80.sd.sd.cox.net) has joined #ceph
[22:07] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[22:08] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[22:08] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[22:08] * cookednoodles (~eoin@89-93-153-201.hfc.dyn.abo.bbox.fr) Quit (Quit: Ex-Chat)
[22:08] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[22:09] <bsanders> Had two questions about RBD: 1) is the block prefix for an RBD image unique across the cluster? and 2) Is there a way to query (REST, maybe?) RBD to find out what RBD images are available, what pools they belong to, and what their block name prefixes are?
[22:10] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[22:11] * nitti_ (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) has joined #ceph
[22:11] * agshew_ (~agshew@host-69-145-59-76.bln-mt.client.bresnan.net) Quit (Ping timeout: 480 seconds)
[22:11] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Read error: No route to host)
[22:11] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[22:12] * nitti_ (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) Quit ()
[22:12] * dmsimard is now known as dmsimard_away
[22:12] * Manshoon_ (~Manshoon@199.16.199.4) Quit (Ping timeout: 480 seconds)
[22:13] <burley_> bsanders: "rbd -p POOLNAME list" will list all the images in a pool, paired with "ceph osd lspools" you can list all the available images via cli
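A sketch of that CLI approach, combined with `rbd info` to pull the block name prefixes (the pool name 'rbd' is a placeholder):

    for img in $(rbd -p rbd ls); do
        printf '%s: ' "$img"
        rbd -p rbd info "$img" | grep block_name_prefix
    done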
[22:15] <bsanders> burley_: thanks, I know about that one, as well as rbd info <...> to get the block prefix. I was hoping to get access from a node without Ceph installed at all, via HTTP or something.
[22:15] * nitti_ (~nitti@162.222.47.218) has joined #ceph
[22:15] <burley_> bsanders: can't help you there then
[22:15] <bsanders> burley_: thats ok, appreciate the help anyway. :)
[22:15] * N3X15 (~Salamande@ns317502.ip-91-121-104.eu) has joined #ceph
[22:16] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Remote host closed the connection)
[22:16] <burley_> bsanders: http://ceph.com/docs/master/rbd/librbdpy/ ?
[22:17] <burley_> but then the node would still need to be part of it
[22:17] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[22:18] * nitti (~nitti@162.222.47.218) Quit (Ping timeout: 480 seconds)
[22:18] <bsanders> Yup. If we need to, we might use something like librbdpy to make our own flask app
[22:19] * togdon (~togdon@74.121.28.6) has joined #ceph
[22:19] * cdelatte (~cdelatte@204-235-114.64.twcable.com) Quit (Ping timeout: 480 seconds)
[22:19] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[22:19] <burley_> I suspect the answer is no, at least not in ceph proper, since I don't see anything in the docs about it in a quick look
[22:20] <bsanders> I think you're right. It's not one of the things that 'ceph-rest-api' displays either.
[22:21] <burley_> looks like calamari has one too, but haven't looked at what it provides
[22:21] * dyasny (~dyasny@173.231.115.58) Quit (Ping timeout: 480 seconds)
[22:22] * hellertime (~Adium@a72-246-185-10.deploy.akamaitechnologies.com) has joined #ceph
[22:22] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[22:22] <joshd> bsanders: I don't think it's exposed via calamari's api either, but it'd be useful to have there or ceph-rest-api
[22:23] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[22:23] <bsanders> joshd: ok, so no specific reason it isn't there, just nobody has gotten around to implementing it?
[22:23] <joshd> exactly
[22:23] * peeejayz (~peeejayz@vpn-2-236.rl.ac.uk) has joined #ceph
[22:24] <peeejayz> Hi all, I was wondering: is there a recommended way of shutting down a cluster? I have to take mine down for urgent power maintenance + UPS upgrades.
[22:25] <bsanders> joshd: thanks! If we do end up implementing, I'll see if I can herd the cats toward doing it in one of those two places and doing a PR.
[22:25] <joshd> bsanders: great!
[22:25] <peeejayz> I'm guessing just set noout and then shut them all down at the same time. And bring them up all together and let ceph repair itself?
[22:25] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[22:26] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[22:26] <bsanders> peeejayz: not sure if its the proper way, but its how I've been doing it.
[22:26] <bsanders> (and don't forget to unset noout :)
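Spelled out, the sequence being described (the commonly suggested order, not an official procedure):

    ceph osd set noout     # keep OSDs from being marked out while everything is down
    # stop OSD daemons/nodes, then the monitors; on power-up, start mons first, then OSDs
    ceph osd unset noout   # once the cluster is back to HEALTH_OK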
[22:27] * oblu (~o@62.109.134.112) has joined #ceph
[22:27] * bsanders is now known as bsanders_afk
[22:28] <peeejayz> bsanders: that's what I thought. Luckily I only have a small 5-node 130TB cluster currently, so when it comes back up it shouldn't be too bad, hopefully. Have you had any problems when it comes back up?
[22:29] * lifeboy (~roland@196.45.29.44) Quit (Quit: Ex-Chat)
[22:31] <bdonnahue> can anyone explain how the dmcrypt arg works for osd prepare?
[22:31] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (Quit: Lost terminal)
[22:32] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[22:32] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[22:32] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[22:33] * garphy is now known as garphy`aw
[22:33] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) Quit ()
[22:34] * sputnik13 (~sputnik13@c-73-193-97-20.hsd1.wa.comcast.net) has joined #ceph
[22:34] * smokedmeets (~smokedmee@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[22:37] * smokedmeets (~smokedmee@c-67-174-241-112.hsd1.ca.comcast.net) Quit ()
[22:38] * smokedmeets (~smokedmee@c-67-174-241-112.hsd1.ca.comcast.net) has joined #ceph
[22:42] * thb (~me@2a02:2028:2f:30c1:b973:adb6:8ca6:702d) has joined #ceph
[22:44] <bdonnahue> Error: Device /dev/xvdb2 is in use by a device-mapper mapping (dm-crypt?): dm-3
[22:44] * N3X15 (~Salamande@2BLAAFW0P.tor-irc.dnsbl.oftc.net) Quit ()
[22:44] * ItsCriminalAFK (~hifi@95.130.15.96) has joined #ceph
[22:44] <bdonnahue> I'm not sure why I'm seeing this error. LUKS says that is not a device
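A reboot clears this, but releasing the stale mapping by hand may also work; a sketch (the mapping name is whatever the first command shows holding the device):

    dmsetup ls                      # identify which mapping is holding /dev/xvdb2
    dmsetup remove <mapping-name>   # placeholder; or: cryptsetup luksClose <mapping-name>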
[22:45] * lalatenduM (~lalatendu@122.171.204.100) Quit (Quit: Leaving)
[22:46] * as2196 (~as2196@204.91.28.100) has joined #ceph
[22:47] * georgem (~Adium@184.151.178.243) has joined #ceph
[22:47] <as2196> hi, ceph newbie here, i wanted to start testing ceph to replace our current storage backend and had a couple of questions
[22:47] * hellertime (~Adium@a72-246-185-10.deploy.akamaitechnologies.com) Quit (Read error: Connection reset by peer)
[22:47] * hellertime (~Adium@pool-173-48-56-84.bstnma.fios.verizon.net) has joined #ceph
[22:48] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) has joined #ceph
[22:48] <Sysadmin88> ask your questions
[22:48] <as2196> we use centos; what version is going to be the least painful, 6 or 7?
[22:48] <as2196> it's a POC and I want to be able to get decent performance out of erasure coded pools
[22:48] * saltlake (~saltlake@12.250.199.170) Quit (Ping timeout: 480 seconds)
[22:49] <as2196> also, is the stock kernel good enough to test on centos, or should I be looking at mainline?
[22:49] * mykola (~Mikolaj@91.225.201.255) Quit (Quit: away)
[22:49] <as2196> btrfs vs xfs vs ext4 vs ext4 w/o journaling
[22:50] <peeejayz> Anyone know where the release notes for 0.87.1 are? I've just seen it's available as an update now. And as I have my cluster in maintenance, it's a good time to do it too
[22:50] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Quit: Exeunt dneary)
[22:51] <burley_> as2196: We use centos 7 with the latest stock kernel, with xfs since its the most tested of the options
[22:51] <as2196> burley_, thanks - any other pointers would be helpful as well, doing RTFM right now
[22:52] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[22:53] <burley_> I can tell you we've hit a few odd issues, one related to page allocation failures on the OSD nodes, which then cause performance to tank -- increasing the vm.min_free_kbytes value to something around 1GB/OSD seems to help a bit, but wastes a lot of memory
[22:53] <burley_> and making sure vm.zone_reclaim_mode is set to 0
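Those two settings as they might be applied; the min_free_kbytes value is an example for an 8-OSD node at ~1GB per OSD:

    sysctl -w vm.zone_reclaim_mode=0
    sysctl -w vm.min_free_kbytes=8388608   # 8GB in kB; scale to ~1GB per OSD
    # add both to /etc/sysctl.conf (or /etc/sysctl.d/) to persist across reboots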
[22:53] <as2196> the boxes i am testing are Supermicro 4U with 36 drives 24 spinning and 12 SSD, this is what we used previously
[22:54] <as2196> they have 512G of RAM and dual 8 core
[22:54] <as2196> thats what I have to work with right now but that gets revisited later
[22:55] <as2196> what do you use for config management and deployments?
[22:57] * togdon (~togdon@74.121.28.6) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[22:58] * brutuscat (~brutuscat@174.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[23:01] * puffy (~puffy@216.207.42.144) has joined #ceph
[23:02] * joshd (~jdurgin@38.122.20.226) Quit (Quit: Leaving.)
[23:03] <burley_> as2196: We have an internal config management system that we wrote many years ago
[23:03] * togdon (~togdon@74.121.28.6) has joined #ceph
[23:03] <bdonnahue> ah i needed a reboot after zapping
[23:04] * togdon (~togdon@74.121.28.6) Quit ()
[23:04] <as2196> burley_: cool. also, do you use cache tiering, and have you played around with erasure coded pools?
[23:04] <burley_> we do use cache tiering for one of our data sets, we don't use EC pools -- we just got big enough drives to work around that need
[23:06] <as2196> we have 6TB drives but it's a lot of data and storing 3 copies is just not cost effective; we currently have 6 copies because they are in a DB and that's on a 3-way mirror on RAID10
[23:07] * Manshoon (~Manshoon@199.16.199.4) Quit (Remote host closed the connection)
[23:07] * joshd (~jdurgin@38.122.20.226) has joined #ceph
[23:08] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[23:08] * puffy1 (~puffy@216.207.42.129) Quit (Ping timeout: 480 seconds)
[23:08] * Manshoon (~Manshoon@199.16.199.4) Quit (Remote host closed the connection)
[23:09] * Manshoon (~Manshoon@199.16.199.4) has joined #ceph
[23:11] * mfa298_ (~mfa298@gateway.yapd.net) Quit (Ping timeout: 480 seconds)
[23:14] * ItsCriminalAFK (~hifi@3N2AABB6H.tor-irc.dnsbl.oftc.net) Quit ()
[23:15] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[23:15] * Kizzi (~Nanobot@exit1.ipredator.se) has joined #ceph
[23:17] * hellertime (~Adium@pool-173-48-56-84.bstnma.fios.verizon.net) Quit (Quit: Leaving.)
[23:19] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) has joined #ceph
[23:20] * mfa298 (~mfa298@gateway.yapd.net) has joined #ceph
[23:25] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) has joined #ceph
[23:27] * avozza (~avozza@static-114-198-78-212.thenetworkfactory.nl) has joined #ceph
[23:30] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[23:31] * georgem (~Adium@184.151.178.243) Quit (Quit: Leaving.)
[23:32] * nitti_ (~nitti@162.222.47.218) Quit (Remote host closed the connection)
[23:34] * DV (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[23:35] * ChrisNBl_ (~ChrisNBlu@dhcp-ip-230.dorf.rwth-aachen.de) Quit (Quit: My Mac has gone to sleep. ZZZzzz???)
[23:35] * nitti (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) has joined #ceph
[23:36] * oro (~oro@80-219-254-208.dclient.hispeed.ch) has joined #ceph
[23:38] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (Quit: Leaving)
[23:43] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[23:43] * greatmane (~greatmane@CPE-124-188-114-5.wdcz1.cht.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[23:44] * Kizzi (~Nanobot@2BLAAFW29.tor-irc.dnsbl.oftc.net) Quit ()
[23:44] * Azru (~nicatronT@212.7.194.71) has joined #ceph
[23:48] * Nats (~natscogs@114.31.195.238) has joined #ceph
[23:49] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:50] * Manshoon (~Manshoon@199.16.199.4) Quit (Remote host closed the connection)
[23:52] * nitti (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[23:52] * nitti (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) has joined #ceph
[23:53] * jdillaman (~jdillaman@pool-108-56-67-212.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[23:54] * avozza (~avozza@static-114-198-78-212.thenetworkfactory.nl) Quit (Remote host closed the connection)
[23:54] * nitti (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) Quit (Read error: Connection reset by peer)
[23:54] * nitti (~nitti@173-160-123-93-Minnesota.hfc.comcastbusiness.net) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.