#ceph IRC Log

IRC Log for 2015-07-22

Timestamps are in GMT/BST.

[0:01] * rlrevell (~leer@184.52.129.221) has joined #ceph
[0:02] * fmanana (~fdmanana@bl13-155-79.dsl.telepac.pt) Quit (Ping timeout: 480 seconds)
[0:07] * ircolle (~Adium@2601:285:201:2bf9:1cae:844d:bff8:d8ad) Quit (Quit: Leaving.)
[0:10] * sleinen1 (~Adium@2001:620:0:69::102) Quit (Ping timeout: 480 seconds)
[0:13] * ircolle (~Adium@2601:285:201:2bf9:1cae:844d:bff8:d8ad) has joined #ceph
[0:19] * garphy is now known as garphy`aw
[0:20] * reed (~reed@75-104-70-4.mobility.exede.net) Quit (Ping timeout: 482 seconds)
[0:23] * moore (~moore@64.202.160.88) Quit (Read error: Connection reset by peer)
[0:24] * moore (~moore@64.202.160.88) has joined #ceph
[0:24] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[0:25] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[0:27] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:29] * gucki (~smuxi@212-51-155-85.fiber7.init7.net) Quit (Ping timeout: 480 seconds)
[0:32] * Concubidated (~Adium@161.225.196.30) Quit (Quit: Leaving.)
[0:33] * bro_ (~flybyhigh@panik.darksystem.net) has joined #ceph
[0:34] * o0c_ (~o0c@chris.any.mx) has joined #ceph
[0:34] * DLange (~DLange@dlange.user.oftc.net) Quit (Read error: Connection reset by peer)
[0:34] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[0:35] * wogri_ (~wolf@nix.wogri.at) has joined #ceph
[0:35] * wogri (~wolf@nix.wogri.at) Quit (Read error: Connection reset by peer)
[0:35] * o0c (~o0c@chris.any.mx) Quit (Ping timeout: 480 seconds)
[0:35] * bro__ (~flybyhigh@panik.darksystem.net) Quit (Read error: Connection reset by peer)
[0:37] * psiekl (psiekl@wombat.eu.org) Quit (Ping timeout: 480 seconds)
[0:38] * psiekl (psiekl@wombat.eu.org) has joined #ceph
[0:42] * TheSov (~TheSov@204.13.200.248) Quit (Read error: Connection reset by peer)
[0:46] * Tumm (~rf`@80.82.70.202) has joined #ceph
[0:48] * jvilla (~juvilla@157.166.167.129) Quit (Ping timeout: 480 seconds)
[0:49] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Remote host closed the connection)
[0:50] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) has joined #ceph
[0:50] * vata (~vata@207.96.182.162) Quit (Quit: Leaving.)
[0:51] * rendar (~I@host249-36-dynamic.31-79-r.retail.telecomitalia.it) Quit ()
[0:57] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Read error: Connection reset by peer)
[0:58] * haomaiwang (~haomaiwan@60-250-10-249.HINET-IP.hinet.net) Quit (Ping timeout: 480 seconds)
[1:00] * Concubidated (~Adium@66-87-144-189.pools.spcsdns.net) has joined #ceph
[1:02] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:03] * ircolle (~Adium@2601:285:201:2bf9:1cae:844d:bff8:d8ad) Quit (Quit: Leaving.)
[1:04] * xarses (~xarses@12.164.168.117) Quit (Ping timeout: 480 seconds)
[1:05] * rlrevell (~leer@184.52.129.221) has joined #ceph
[1:16] * Tumm (~rf`@80.82.70.202) Quit ()
[1:17] * oms101 (~oms101@p20030057EA71B300EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:17] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[1:20] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[1:26] * oms101 (~oms101@p20030057EA0A5900EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:26] * rlrevell (~leer@184.52.129.221) has joined #ceph
[1:27] * zaitcev (~zaitcev@c-76-113-49-212.hsd1.nm.comcast.net) Quit (Quit: Bye)
[1:31] * danieagle (~Daniel@187.101.45.207) Quit (Quit: Thanks for everything! :-) see you later :-))
[1:32] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[1:41] * CheKoLyN (~saguilar@bender.parc.xerox.com) Quit (Ping timeout: 480 seconds)
[1:41] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[1:44] * pmatulis (~peter@ec2-23-23-42-7.compute-1.amazonaws.com) Quit (Remote host closed the connection)
[1:45] * pmatulis (~peter@ec2-23-23-42-7.compute-1.amazonaws.com) has joined #ceph
[1:48] * jclm (~jclm@ip24-253-98-109.lv.lv.cox.net) Quit (Read error: Connection reset by peer)
[1:48] * nils_ (~nils@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[1:49] * jclm (~jclm@ip24-253-98-109.lv.lv.cox.net) has joined #ceph
[1:49] * thansen (~thansen@17.253.sfcn.org) Quit (Quit: Ex-Chat)
[1:51] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[1:51] * Concubidated (~Adium@66-87-144-189.pools.spcsdns.net) Quit (Quit: Leaving.)
[1:52] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[1:53] * shaunm (~shaunm@74.215.76.114) has joined #ceph
[2:03] <Kupo1> Would there be any issues with hostnames changing on ceph monitors but ip's remaining the same?
[2:04] * jvilla (~juvilla@c-73-7-96-50.hsd1.ga.comcast.net) has joined #ceph
[2:06] * nsoffer (~nsoffer@bzq-109-65-255-114.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[2:06] * rlrevell (~leer@184.52.129.221) has joined #ceph
[2:11] * moore (~moore@64.202.160.88) Quit (Remote host closed the connection)
[2:15] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: leaving)
[2:16] * fam_away is now known as fam
[2:31] * lucas1 (~Thunderbi@218.76.52.64) has joined #ceph
[2:34] * rlrevell (~leer@184.52.129.221) Quit (Quit: Leaving.)
[2:34] * rlrevell (~leer@184.52.129.221) has joined #ceph
[2:37] * jvilla (~juvilla@c-73-7-96-50.hsd1.ga.comcast.net) Quit (Quit: jvilla)
[2:45] * jvilla (~juvilla@157.166.175.129) has joined #ceph
[2:45] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[2:49] * jclm (~jclm@ip24-253-98-109.lv.lv.cox.net) Quit (Read error: Connection reset by peer)
[2:49] * jclm (~jclm@ip24-253-98-109.lv.lv.cox.net) has joined #ceph
[3:01] * sean_ (~seapasull@95.85.33.150) Quit (Remote host closed the connection)
[3:05] * rlrevell (~leer@184.52.129.221) has joined #ceph
[3:10] * zhaochao (~zhaochao@111.161.77.231) has joined #ceph
[3:10] * kutija (~kutija@95.180.90.38) Quit (Ping timeout: 480 seconds)
[3:15] * capri (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[3:17] * shyu (~Shanzhi@119.254.120.66) has joined #ceph
[3:19] * yguang11 (~yguang11@2001:4998:effd:600:a090:517d:fd9:5edd) Quit ()
[3:25] * rlrevell (~leer@184.52.129.221) Quit (Read error: Connection reset by peer)
[3:25] * kefu (~kefu@114.86.210.64) has joined #ceph
[3:25] * kefu is now known as kefu|afk
[3:26] * sjm (~sjm@pool-100-1-115-73.nwrknj.fios.verizon.net) has joined #ceph
[3:32] * kefu|afk is now known as kefu
[3:40] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[3:40] * kefu (~kefu@114.86.210.64) has joined #ceph
[3:46] * rektide_ is now known as rektide
[3:47] * yanzheng (~zhyan@182.139.205.112) has joined #ceph
[3:50] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[3:56] * scuttlemonkey is now known as scuttle|afk
[3:57] <evilrob00> my monitor nodes have 2 IPs. I need to change the IP in the ceph.conf to point to the other interface. can I just edit the ceph.conf?
[3:58] <snakamoto> I believe you need to edit the monmap
[3:59] * evilrob00 is ceph n00b.
[3:59] <evilrob00> I forgot that was a thing
[3:59] * evilrob00 will look it up.
[3:59] <evilrob00> thanks for the pointer
[4:00] <snakamoto> np =D
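A rough sketch of the offline monmap edit snakamoto points at, following the standard procedure for changing a monitor's address; the mon id, path, and address below are placeholders rather than values from this conversation:

    # stop the monitor, then pull its current map out
    ceph-mon -i mon1 --extract-monmap /tmp/monmap
    # inspect it, drop the stale entry, add the corrected one
    monmaptool --print /tmp/monmap
    monmaptool --rm mon1 /tmp/monmap
    monmaptool --add mon1 192.168.0.11:6789 /tmp/monmap
    # push the edited map back in and restart the monitor
    ceph-mon -i mon1 --inject-monmap /tmp/monmap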
[4:07] * DV_ (~veillard@2001:41d0:a:f29f::1) Quit (Remote host closed the connection)
[4:08] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[4:13] <evilrob00> snakamoto: so I can use monmaptool to add monitors to the file, but when I try to remove them it claims that the map doesn't contain that named mon
[4:14] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[4:15] <snakamoto> I've only done adds before, in a test environment
[4:15] <snakamoto> Never tried removes
[4:15] <evilrob00> frustrating
[4:19] * rushworld (~Azerothia@freedom.ip-eend.nl) has joined #ceph
[4:22] <evilrob00> so don't use monmaptool. ceph mon remove, and ceph mon add
[4:23] * DV_ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[4:23] <snakamoto> that worked?
[4:25] <evilrob00> for one of them
[4:26] <evilrob00> I did the second remove, and in trying to replace it I get error connecting to cluster
[4:28] <evilrob00> I'm contemplating just blowing it all away. it's not production yet.
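For reference, the runtime alternative evilrob00 describes above is just the two monitor commands, run from a node that can still reach a quorum; the mon name and address here are placeholders:

    # drop the monitor that is moving
    ceph mon remove mon2
    # re-add it under its new address, then start a ceph-mon daemon there
    ceph mon add mon2 192.168.0.12:6789

As Nats_ notes below, this only works while the remaining monitors can still form a quorum and reach each other.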
[4:36] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:37] * kefu_ (~kefu@120.204.164.217) has joined #ceph
[4:37] * Steki (~steki@cable-89-216-232-79.dynamic.sbb.rs) has joined #ceph
[4:39] * zaitcev (~zaitcev@c-76-113-49-212.hsd1.nm.comcast.net) has joined #ceph
[4:42] <Nats_> the problem you will encounter is if the monitor on old ip cannot talk to monitor on new ip
[4:43] <Nats_> ie, if there's no route between the old and new ip's
[4:43] <Nats_> then you'll lose quorum
[4:43] * BManojlovic (~steki@87.116.176.82) Quit (Ping timeout: 480 seconds)
[4:43] <evilrob00> right now mon2 and mon3 won't even start
[4:44] <Nats_> so if its a test env i would just start over
[4:44] * kefu (~kefu@114.86.210.64) Quit (Ping timeout: 480 seconds)
[4:44] <Nats_> otherwise, you'd have to at least temporarily provide routing between the new and old ip ranges
[4:44] <snakamoto> I didn't think of that. There is only one cluster network
[4:45] <evilrob00> yeah. I could probably hack up a new monmap, start ceph-mon with the new one use that to sort out removing, but with the ease of ceph-deploy, I'll just shit on it all
[4:46] <evilrob00> I'll have to remake pools and re-do a couple of things for openstack integration (I think?) but it'll take less time
[4:46] <Nats_> in my test env i ended up just making a bash script to kill everything and start over
[4:46] <Nats_> now i probably use that at least once a month
[4:47] * kefu_ (~kefu@120.204.164.217) Quit (Max SendQ exceeded)
[4:48] * kefu (~kefu@120.204.164.217) has joined #ceph
[4:49] * rushworld (~Azerothia@5NZAAFDMW.tor-irc.dnsbl.oftc.net) Quit ()
[4:50] <evilrob00> "ceph-deploy purge <node1> <node2> ... <nodeN>; ceph-deploy purgedata <node1> <node2> ... <nodeN>; ceph-deploy forgetkeys"
[4:55] <evilrob00> the "interesting part" is that I have 34 4TB disks on each node. lucky for me I got smart and wrote a script to set up the OSDs
[5:09] <doppelgrau> evilrob00: why not use some configuration management like ansible/salt/puppet
[5:10] <doppelgrau> evilrob00: combined with automatic deployment of the base system (e.g. fai, kickstart) that's very handy
[5:13] * overclk (~overclk@121.244.87.117) has joined #ceph
[5:14] * kefu_ (~kefu@114.86.210.64) has joined #ceph
[5:17] <Regotesolcmai> Quick question: is deep scrubbing supposed to be capable of identifying inconsistent pg's? (If ya'll don't know I'll just start readin' the source)
[5:18] <snakamoto> This is what was told to us in the Ceph training:
[5:18] <Regotesolcmai> I had a situation where I accidentally corrupted some ceph attributes which set the size of a pg to 0
[5:18] <Regotesolcmai> I wrote a script to handle deep scrubbing of all pg's (N at a time)
[5:19] <snakamoto> the light scrub will go through all of the PGs and verify that they are consistent with each other
[5:19] * kefu (~kefu@120.204.164.217) Quit (Ping timeout: 480 seconds)
[5:19] <snakamoto> the deep scrub will do a checksum verification for all of the objects in each PG
[5:19] <Regotesolcmai> and it didn't find the inconsistency until the normal background scrub finally rolled across it
[5:19] <doppelgrau> Regotesolcmai: "normal" scrub: compare meta-data, deep-scrub: compare content (using hashes)
[5:20] <doppelgrau> (from documentation, not source)
[5:20] <Regotesolcmai> I'm guessing the attrs would be classified as metadata?
[5:20] <Regotesolcmai> I think I'll still poke
[5:21] <Regotesolcmai> because it really confused me
[5:23] <Regotesolcmai> ty tho <3.
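For reference, a minimal sketch of kicking off and checking scrubs by hand; the pg id is a placeholder:

    # force a deep scrub of one placement group
    ceph pg deep-scrub 2.1f
    # inconsistencies found by a scrub show up in the health output
    ceph health detail | grep inconsistent
    # a flagged PG can then be repaired
    ceph pg repair 2.1f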
[5:26] * yghannam (~yghannam@0001f8aa.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:29] * Vacuum_ (~Vacuum@88.130.223.16) has joined #ceph
[5:30] <jvilla> Hey guys. Anyone using the rbdmount script on Ubuntu 14.04 successfully? I have 5 Ubuntu 14.04 VMs configured to map a single rbd volume. On boot, it's mapped perfectly and I can mount my device (either manually or via fstab), on shutdown it hangs running "rbd unmap /dev/rbd1" and never completes the shut down. :(
[5:31] <jvilla> The process state is "S (sleeping)" and strace reports "wait4(-1,". If I kill the process I can complete the shut down
[5:31] <doppelgrau> jvilla: don't use it, but is the network online at the umount-event?
[5:32] <jvilla> yep
[5:32] <jvilla> network is online. I'm still SSHed into the VM
[5:32] <jvilla> what do you recommend as an alternative to rbdmount to mount an rbd volume on boot?
[5:34] <doppelgrau> jvilla: sorry, I only use rbd-Images as backend for VMs, not inside them
[5:34] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[5:34] <jvilla> got it, I guess I'll keep digging.
[5:34] <doppelgrau> jvilla: but does it work manually
[5:35] <jvilla> yes
[5:35] <jvilla> it works perfectly manually
[5:35] <jvilla> if I run "umount /mnt && rbd unmap /dev/rbd1" and then shut down it works correctly
[5:36] * Vacuum__ (~Vacuum@88.130.211.252) Quit (Ping timeout: 480 seconds)
[5:42] * snakamoto (~Adium@192.16.26.2) Quit (Quit: Leaving.)
[5:43] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[5:44] * terje (~root@135.109.216.239) has joined #ceph
[5:46] * terje_ (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[5:47] <jvilla> doppelgrau: I removed the /etc/fstab entry and I am just using the rbdmap init script to map and unmap the rbd volume. Same issue when unmapping on shutdown. The "rbd unmap" simply hangs with no logging anywhere. The network is still fully operational. Very strange.
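The manual sequence jvilla describes, for reference (the mountpoint and device are the ones from his messages):

    # see what is currently mapped
    rbd showmapped
    # unmount and unmap by hand before shutting the VM down
    umount /mnt
    rbd unmap /dev/rbd1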
[5:51] * kefu (~kefu@120.204.164.217) has joined #ceph
[5:51] * terje_ (~root@135.109.216.239) has joined #ceph
[5:52] * ketor (~ketor@li218-88.members.linode.com) has joined #ceph
[5:53] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[5:58] * terje (~root@135.109.216.239) has joined #ceph
[5:58] * kefu_ (~kefu@114.86.210.64) Quit (Ping timeout: 480 seconds)
[6:00] * terje_ (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[6:01] * Vacuum_ (~Vacuum@88.130.223.16) Quit (Ping timeout: 480 seconds)
[6:03] <evilrob00> doppelgrau: we use salt in-house. I set this cluster up by hand to learn it, and never played with salt states out there for ceph.
[6:03] * Concubidated (~Adium@66-87-144-189.pools.spcsdns.net) has joined #ceph
[6:03] <evilrob00> so I seem to have made some mistakes setting it up. (first cluster).
[6:04] <evilrob00> I've actually used https://github.com/komljen/ceph-salt in setting up a cluster in vagrant before. I need to play with it more.
[6:04] * kanagaraj (~kanagaraj@121.244.87.117) has joined #ceph
[6:06] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[6:10] * terje (~root@135.109.216.239) has joined #ceph
[6:16] * terje_ (~root@135.109.216.239) has joined #ceph
[6:16] * terje (~root@135.109.216.239) Quit (Read error: Connection reset by peer)
[6:23] * terje (~root@135.109.216.239) has joined #ceph
[6:24] * terje_ (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[6:25] * fam is now known as fam_away
[6:28] * Gibri (~Bj_o_rn@104.238.169.112) has joined #ceph
[6:32] * terje_ (~root@135.109.216.239) has joined #ceph
[6:34] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[6:54] * kefu (~kefu@120.204.164.217) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:58] * Gibri (~Bj_o_rn@104.238.169.112) Quit ()
[6:59] * kefu (~kefu@120.204.164.217) has joined #ceph
[7:00] * chasmo77 (~chas77@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[7:00] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Leaving)
[7:02] * kutija (~kutija@95.180.90.38) has joined #ceph
[7:03] * sjm (~sjm@pool-100-1-115-73.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[7:07] * jvilla (~juvilla@157.166.175.129) Quit (Quit: jvilla)
[7:11] * kefu (~kefu@120.204.164.217) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[7:11] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[7:11] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[7:21] * zaitcev (~zaitcev@c-76-113-49-212.hsd1.nm.comcast.net) Quit (Quit: Bye)
[7:21] * kefu (~kefu@120.204.164.217) has joined #ceph
[7:25] * terje (~root@135.109.216.239) has joined #ceph
[7:27] * terje_ (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[7:28] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:32] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:35] * Kingrat (~shiny@2605:a000:161a:c022:5d4:fd73:460f:ddb7) Quit (Ping timeout: 480 seconds)
[7:38] * terje_ (~root@135.109.216.239) has joined #ceph
[7:40] * Kingrat (~shiny@cpe-74-129-220-71.kya.res.rr.com) has joined #ceph
[7:40] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[7:42] * kefu (~kefu@120.204.164.217) Quit (Max SendQ exceeded)
[7:43] * kefu (~kefu@120.204.164.217) has joined #ceph
[7:43] * gucki (~smuxi@212-51-155-85.fiber7.init7.net) has joined #ceph
[7:45] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[7:45] * boolman (boolman@79.138.78.238) Quit (Read error: Connection reset by peer)
[7:47] * linjan (~linjan@176.195.70.29) has joined #ceph
[7:50] * terje (~root@135.109.216.239) has joined #ceph
[7:53] * terje_ (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[7:56] * terje_ (~root@135.109.216.239) has joined #ceph
[7:58] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[8:01] * muni (~muni@bzq-218-200-252.red.bezeqint.net) has joined #ceph
[8:02] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[8:02] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[8:02] * i_m (~ivan.miro@83.149.35.166) has joined #ceph
[8:04] * terje (~root@135.109.216.239) has joined #ceph
[8:04] * linjan (~linjan@176.195.70.29) Quit (Ping timeout: 480 seconds)
[8:05] * terje_ (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[8:06] * Miouge (~Miouge@91.178.33.59) has joined #ceph
[8:08] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) has joined #ceph
[8:10] * terje (~root@135.109.216.239) Quit (Remote host closed the connection)
[8:10] * terje (~root@135.109.216.239) has joined #ceph
[8:11] * ilken (ilk@2602:63:c2a2:af00:e49b:80e8:6d75:4f7b) Quit (Ping timeout: 480 seconds)
[8:11] * Steki (~steki@cable-89-216-232-79.dynamic.sbb.rs) Quit (Quit: I'm off, and you lot do what you want...)
[8:13] <ndru> Hello all, working with glance on ceph. It seems that if I create a volume from an image and the volume size differs from the image size, it still downloads the image to the host, converts it, then imports it. Is there a way to prevent this from happening?
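A hedged note on this: with cinder's RBD driver, the download-and-convert path is normally avoided only when glance stores the image in the same Ceph cluster, the image is in raw format, and glance exposes the image location; whether that matches ndru's setup is an assumption. A sketch of the relevant pieces:

    # glance-api.conf: let clients see the image's RBD location
    show_image_direct_url = True

    # images generally need to be raw for cinder to clone them inside RBD
    qemu-img convert -f qcow2 -O raw image.qcow2 image.raw
    glance image-create --name my-image --disk-format raw --container-format bare --file image.raw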
[8:18] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[8:20] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[8:21] * terje (~root@135.109.216.239) has joined #ceph
[8:22] * kefu_ (~kefu@114.86.210.64) has joined #ceph
[8:24] * kefu (~kefu@120.204.164.217) Quit (Read error: No route to host)
[8:25] * kefu (~kefu@120.204.164.217) has joined #ceph
[8:26] * terje_ (~root@135.109.216.239) has joined #ceph
[8:27] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[8:27] * kefu_ (~kefu@114.86.210.64) Quit (Read error: Connection reset by peer)
[8:29] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[8:30] * Vacuum_ (~Vacuum@88.130.223.16) has joined #ceph
[8:33] * kefu (~kefu@120.204.164.217) Quit (Ping timeout: 480 seconds)
[8:35] * terje_ (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[8:38] * kefu__ (~kefu@114.86.210.64) has joined #ceph
[8:39] * kefu__ is now known as kefu
[8:39] * gucki (~smuxi@212-51-155-85.fiber7.init7.net) Quit (Ping timeout: 480 seconds)
[8:39] * terje (~root@135.109.216.239) has joined #ceph
[8:47] * calvinx (~calvin@76.164.201.39) has joined #ceph
[8:48] * rendar (~I@host115-196-dynamic.181-80-r.retail.telecomitalia.it) has joined #ceph
[8:48] * terje_ (~root@135.109.216.239) has joined #ceph
[8:50] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[8:54] * engage (~engage@175.45.101.6) has joined #ceph
[8:55] * cok (~chk@2a02:2350:18:1010:61b6:68f2:587d:1231) has joined #ceph
[8:55] * SinZ|offline (~Xerati@h-213.61.149.100.host.de.colt.net) has joined #ceph
[8:57] * rdas (~rdas@121.244.87.116) has joined #ceph
[8:59] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:00] * dneary_ (~dneary@AGrenoble-651-1-600-61.w90-52.abo.wanadoo.fr) has joined #ceph
[9:00] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) has joined #ceph
[9:04] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[9:05] * sleinen1 (~Adium@2001:620:0:69::106) has joined #ceph
[9:05] * yanzheng1 (~zhyan@182.139.21.196) has joined #ceph
[9:06] * adrian15b (~kvirc@btactic.ddns.jazztel.es) has joined #ceph
[9:07] <engage> Hello. Awesome if anyone could point me in the right direction with this: "2015-07-22 17:01:19.918443 mon.0 [INF] pgmap v5451560: 591 pgs: 589 active+clean, 1 active+remapped+backfill_toofull, 1 active+clean+scrubbing; 6478 GB data, 19625 GB used, 36237 GB / 55863 GB avail; 6036 kB/s rd, 2434 kB/s wr, 61 op/s; 1/26358793 objects degraded (0.000%)" Thanks :-)
[9:07] * yanzheng (~zhyan@182.139.205.112) Quit (Ping timeout: 480 seconds)
[9:07] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[9:08] <Be-El> hi
[9:08] * capri (~capri@212.218.127.222) has joined #ceph
[9:09] * bitserker (~toni@88.87.194.130) has joined #ceph
[9:09] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[9:10] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Read error: Connection reset by peer)
[9:11] * derjohn_mob (~aj@fw.gkh-setu.de) has joined #ceph
[9:11] <doppelgrau> engage: one OSD is too full for remapping the PG
[9:14] <engage> Thanks. How would I go about sorting this? I'm struggling to find anything in the troubleshooting docs.
[9:15] * terje (~root@135.109.216.239) has joined #ceph
[9:15] <doppelgrau> engage: ceph osd df might be a good starting point
[9:16] * bitserker (~toni@88.87.194.130) Quit (Quit: Leaving.)
[9:16] * bitserker (~toni@88.87.194.130) has joined #ceph
[9:16] * sleinen1 (~Adium@2001:620:0:69::106) Quit (Read error: Connection reset by peer)
[9:17] * terje_ (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[9:19] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[9:19] * ChanServ sets mode +o elder
[9:20] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[9:25] * adrian (~abradshaw@130.117.88.81) has joined #ceph
[9:25] <Be-El> ceph health detail should also list all osd that are nearly full
[9:25] * SinZ|offline (~Xerati@5NZAAFDVK.tor-irc.dnsbl.oftc.net) Quit ()
[9:25] * adrian is now known as Guest1005
[9:29] * dgurtner (~dgurtner@178.197.231.188) has joined #ceph
[9:33] <engage> osd.12 is near full at 87%
[9:34] <engage> Sorry for the dumb questions. Still new to Ceph. It's been running for about 12 months and haven't had to fix anything on it yet.
[9:35] * branto (~branto@ip-213-220-232-132.net.upcbroadband.cz) has joined #ceph
[9:35] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:36] * b0e (~aledermue@213.95.25.82) has joined #ceph
[9:38] * jcsp (~jspray@82-71-16-249.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[9:38] * sleinen (~Adium@130.59.94.212) has joined #ceph
[9:39] * sleinen1 (~Adium@2001:620:0:69::104) has joined #ceph
[9:39] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) has joined #ceph
[9:42] * terje_ (~root@135.109.216.239) has joined #ceph
[9:43] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[9:44] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[9:44] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[9:46] * sleinen (~Adium@130.59.94.212) Quit (Ping timeout: 480 seconds)
[9:49] * engage (~engage@175.45.101.6) Quit (Remote host closed the connection)
[9:50] * engage (~engage@175.45.101.6) has joined #ceph
[9:51] * terje_ (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[9:51] * calvinx (~calvin@76.164.201.39) Quit (Quit: calvinx)
[9:53] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[9:57] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[10:04] <Nats_> engage, the simplest solution (assuming you have space on other osd's) is to reweight your full osd
[10:04] <Nats_> ie, reduce its weight, which will push some of its data to other osd's
[10:05] <Nats_> just a 10% reduction would probably do
[10:07] * analbeard (~shw@5.153.255.226) has joined #ceph
[10:11] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[10:16] * jcsp (~jspray@summerhall-meraki1.fluency.net.uk) has joined #ceph
[10:16] <Be-El> or raise the backfill limit to 87% or 88%. keep in mind that the underlying filesystem has to scale well with the amount of free space. btrfs for example does not like to be operated on a full partition
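A sketch of the two options, using osd.12 from engage's output; the numbers are illustrative and the backfill option name is assumed from Hammer-era defaults:

    # per-OSD utilization, as doppelgrau suggested
    ceph osd df
    # Nats_'s option: lower the override weight of the full OSD by about 10%
    ceph osd reweight 12 0.9
    # Be-El's option: temporarily raise the backfill-full threshold
    # (osd_backfill_full_ratio defaults to 0.85 on releases of this era)
    ceph tell osd.* injectargs '--osd-backfill-full-ratio 0.88'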
[10:17] * engage (~engage@175.45.101.6) Quit (Remote host closed the connection)
[10:17] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[10:18] * kefu (~kefu@114.86.210.64) has joined #ceph
[10:18] * gucki (~smuxi@office.netskin.com) has joined #ceph
[10:19] * reinhardt1053 (~reinhardt@195.46.234.215) has joined #ceph
[10:19] <reinhardt1053> Hello
[10:20] * engage (~engage@175.45.101.6) has joined #ceph
[10:20] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[10:21] <reinhardt1053> I got a Ceph 3 node cluster, 1 mon and 2 osd: node-1, node-2, node-3
[10:21] <reinhardt1053> node-1 is the monitor node
[10:21] <reinhardt1053> I tried to add another monitor on node-2
[10:22] <reinhardt1053> and now the cluster is down. ceph command hangs and I can't even check the status
[10:23] <reinhardt1053> digging on the mail list I stumbled upon this http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-December/036280.html
[10:24] <Be-El> check that both mon processes are running and have a look at their log files
[10:24] <Be-El> you can also check the status of a running ceph process via its admin socket
[10:30] <reinhardt1053> I did run this command:
[10:31] <reinhardt1053> ceph --admin-daemon /var/run/ceph/ceph-mon.dms1.asok mon_status
[10:31] <reinhardt1053> {
[10:31] <reinhardt1053> "name": "dms1",
[10:31] <reinhardt1053> "rank": 0,
[10:31] <reinhardt1053> "state": "probing",
[10:31] <reinhardt1053> "election_epoch": 0,
[10:31] <reinhardt1053> "quorum": [],
[10:31] <reinhardt1053> "outside_quorum": [
[10:31] <reinhardt1053> "dms1"
[10:31] <reinhardt1053> ],
[10:31] <reinhardt1053> "extra_probe_peers": [],
[10:31] <reinhardt1053> "sync_provider": [],
[10:31] <reinhardt1053> "monmap": {
[10:31] <reinhardt1053> "epoch": 2,
[10:31] <reinhardt1053> "fsid": "0631e431-2612-4eec-b45f-81587d7edf7a",
[10:31] <reinhardt1053> "modified": "2015-07-21 18:05:00.008292",
[10:31] <reinhardt1053> "created": "0.000000",
[10:31] <reinhardt1053> "mons": [
[10:31] <reinhardt1053> {
[10:31] <reinhardt1053> "rank": 0,
[10:31] <reinhardt1053> "name": "dms1",
[10:31] <reinhardt1053> "addr": "10.65.68.205:6789\/0"
[10:31] <reinhardt1053> },
[10:31] <reinhardt1053> {
[10:31] <reinhardt1053> "rank": 1,
[10:32] <reinhardt1053> "name": "dms2",
[10:32] <reinhardt1053> "addr": "10.65.68.206:6789\/0"
[10:32] <reinhardt1053> }
[10:32] <reinhardt1053> ]
[10:32] <reinhardt1053> }
[10:32] <reinhardt1053> }
[10:32] <reinhardt1053> dms2 is the node hostname I was trying to add as monitor
[10:32] <reinhardt1053> and failed
[10:32] * fmanana (~fdmanana@bl13-153-127.dsl.telepac.pt) has joined #ceph
[10:33] <Be-El> reinhardt1053: please use a paste bin for everything longer than 1-2 lines
[10:33] <reinhardt1053> ok
[10:34] <Be-El> reinhardt1053: are both nodes able to contact each other on port 6789?
[10:34] <reinhardt1053> http://pastebin.com/nBhx9p2v
[10:35] <reinhardt1053> how do I test that out?
[10:35] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[10:36] * kefu (~kefu@114.86.210.64) has joined #ceph
[10:36] <Be-El> I usually try a simple 'telnet host port'
[10:36] * engage (~engage@175.45.101.6) Quit (Remote host closed the connection)
[10:36] <Be-El> if that succeeds in creating a connection, you can rule out firewall problems
[10:37] <Be-El> can you also upload the status of the second mon?
[10:38] <reinhardt1053> I can telnet to node dms1 (the previous monitor node). I cannot telnet to node dms2 (the monitor node I failed to add)
[10:39] <reinhardt1053> sure
[10:40] <reinhardt1053> /var/log/ceph/ceph-mon.dms2.log is empty. Any other place where I can find the log?
[10:41] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[10:41] <Be-El> reinhardt1053: is the mon process running on the second host?
[10:43] <reinhardt1053> http://pastebin.com/SUFL5Uxa
[10:46] <reinhardt1053> and this was the command that failed to add the second monitor: http://pastebin.com/7SsDZ5Zt ceph-deploy hangs on the last command
[10:46] <reinhardt1053> the cluster was healthy before that
[10:48] <Be-El> so the second mon is not running at all. that also explains why the cluster is not accessible....you do not have a quorum with >50% of the mons
[10:48] <Be-El> .oO ( i know why i cannot stand ceph-deploy.... )
[10:49] <Be-El> reinhardt1053: use http://ceph.com/docs/master/rados/operations/add-or-rm-mons/ to manually finish setting up the second and start it
[10:50] <reinhardt1053> Do I need to remove the monitor on dms2 first?
[10:50] <Be-El> NO
[10:50] * engage (~engage@175.45.101.6) has joined #ceph
[10:51] <Be-El> try to start the mon process manually first. maybe ceph-deploy was able to setup the mon correctly
[10:51] <Be-El> you need _both_ mons up and running for any cluster operation
[10:53] <reinhardt1053> {mon-id} = short hostname ?
[10:53] <Be-El> do what i said
[10:53] <Be-El> try to start the mon process first.
[10:53] <Be-El> that's the last step in the manual
[10:54] <Be-El> if that is not going to work, you have to perform the other steps, too
[10:54] <reinhardt1053> yes, I am gonna execute this command : ceph-mon -i {mon-id} --public-addr {ip:port}
[10:55] <Be-El> i do not use ceph-deploy, so i do not known which hostname it uses
[10:55] <Be-El> but I would bet it's the same name you have used for adding the mon
[10:55] <Be-El> it's also the same name the log file and directory in /var/lib/ceph/mon has
[10:56] <reinhardt1053> all right, it should be dms2 in my case
[10:58] <reinhardt1053> ceph-mon -i dms2 --public-addr 10.65.68.206:6789
[10:59] <reinhardt1053> the cluster is healthy again now :-)
[10:59] <Be-El> so both mon processes are up and running now and formed a quorum?
[11:01] <Be-El> you should think about adding the third mon. in the case of two mon, any failure of a single mon will leave your cluster in an inaccessible state
[11:02] <Be-El> the same is also true for any even number of mons, since you need a majority (>50%) of running mons
[11:03] <Be-El> so with 4 mons it's a failure of 2 mon etc.
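For reference, the arithmetic behind this: a quorum needs a strict majority, i.e. floor(N/2)+1 monitors. With 3 mons you can lose 1; with 4 mons you still need 3 up, so losing 2 breaks quorum; with 5 you can lose 2. That is why odd monitor counts are the usual recommendation.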
[11:03] <engage> Nats_ & Be-El: Thanks for the assistance. Should be able to get things rocking again now.
[11:04] * oro (~oro@2001:620:20:16:cd84:5e2e:e3b8:5963) has joined #ceph
[11:04] <Be-El> engage: do you have a heterogeneous cluster (different number/sizes of OSDs per host) or just an uneven data distribution?
[11:05] <reinhardt1053> Yes, I am going to add dms3 as monitor node
[11:06] <engage> Each host is exactly the same
[11:07] <Be-El> engage: in that case the reweight-by-utilization may also help. it adjusts the weight of the individual OSDs according to their current utilization
[11:07] <Be-El> engage: but keep in mind that every change of a weight results in some data movement in the cluster
[11:07] * nils_ (~nils@doomstreet.collins.kg) has joined #ceph
[11:08] <engage> I'll look into that. As I said earlier it's the first Ceph cluster I've built so I'm probably not being very nice to it :-)
[11:10] <Be-El> engage: i think my cluster also had a hard time in the first weeks ;-)
[11:11] <Be-El> engage: do you use xfs or btrfs on the osds?
[11:12] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Remote host closed the connection)
[11:13] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) has joined #ceph
[11:13] <reinhardt1053> ceph-deploy mon add dms3 was successful
[11:13] <reinhardt1053> Be-El thank you very much for your assistance :-)
[11:14] <engage> Went with XFS
[11:14] <Be-El> reinhardt1053: you're welcome
[11:14] <engage> Wasn't 100% comfortable with btrfs when I deployed this cluster
[11:14] * capri_on (~capri@212.218.127.222) has joined #ceph
[11:14] <Be-El> engage: i had some problems with btrfs under ubuntu trusty, so using xfs might be the easier choice
[11:15] * yanzheng1 (~zhyan@182.139.21.196) Quit (Ping timeout: 480 seconds)
[11:15] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[11:15] <engage> All things considered it's been working great. 3 nodes with 12 OSDs each. 10M objects so far
[11:15] * shohn (~shohn@164.139.7.150) has joined #ceph
[11:16] * yanzheng1 (~zhyan@107.170.210.7) has joined #ceph
[11:17] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[11:21] * brutuscat (~brutuscat@105.34.133.37.dynamic.jazztel.es) Quit (Ping timeout: 480 seconds)
[11:22] <engage> Be-El: Thanks again for your help. I'll investigate your recommendations tomorrow. Heading home for the day, getting late here.
[11:22] * linjan (~linjan@93-81-96-39.broadband.corbina.ru) has joined #ceph
[11:22] <Be-El> engage: you're welcome
[11:23] * engage (~engage@175.45.101.6) Quit ()
[11:27] * capri_on (~capri@212.218.127.222) Quit (Quit: Leaving)
[11:28] <reinhardt1053> I am trying to expand my cluster, adding an OSD on node dms1
[11:28] <reinhardt1053> the cluster is now on HEALTH_WARN status
[11:29] <reinhardt1053> what are the implications of that? Do I need to put the Ceph Object Gateway in maintenance mode until the rebalancing process is complete?
[11:33] <Be-El> reinhardt1053: if the warning only refers to rebalancing (-> remapped PGs), access to the cluster is available.
[11:34] <Be-El> reinhardt1053: if some PG are not in an active state (or any combination of other states that do not contain active), IO to them will be blocked until the PG is recovered
[11:34] <Be-El> reinhardt1053: backfilling has an impact on client IO performance, but all services should be operational as long as all pgs are active
[11:35] <Be-El> -> off for lunch
[11:36] <reinhardt1053> Be-El Thank you, that makes sense.
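A few standard status commands that show whether any PGs have actually left the active state during a rebalance (not taken from this conversation, just the usual checks):

    # overall cluster and PG state summary
    ceph -s
    # PGs stuck in a problematic state, if any
    ceph pg dump_stuck inactive
    ceph pg dump_stuck unclean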
[11:37] * analbeard (~shw@5.153.255.226) Quit (Quit: Leaving.)
[11:38] * yanzheng2 (~zhyan@182.139.21.196) has joined #ceph
[11:40] <reinhardt1053> I had a file uploaded through the s3-compatible interface, 600MB size. I am trying to download it while the status is HEALTH_WARN, but it stuck at 20%
[11:40] <reinhardt1053> this is the output of ceph -s
[11:40] <reinhardt1053> http://pastebin.com/HcBFkaT2
[11:41] * analbeard (~shw@5.153.255.226) has joined #ceph
[11:42] * yanzheng1 (~zhyan@107.170.210.7) Quit (Ping timeout: 480 seconds)
[11:42] * yanzheng2 (~zhyan@182.139.21.196) Quit ()
[11:43] * yanzheng2 (~zhyan@182.139.21.196) has joined #ceph
[11:47] <reinhardt1053> now the file download stuck at 40%
[11:48] * linjan (~linjan@93-81-96-39.broadband.corbina.ru) Quit (Ping timeout: 480 seconds)
[11:49] * arbrandes (~arbrandes@179.210.13.90) has joined #ceph
[11:49] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[11:54] * DV_ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[11:57] * shang (~ShangWu@125.227.48.180) has joined #ceph
[11:59] * shang (~ShangWu@125.227.48.180) Quit (Remote host closed the connection)
[12:03] * zhaochao (~zhaochao@111.161.77.231) Quit (Quit: ChatZilla 0.9.91.1 [Iceweasel 38.1.0/20150711212448])
[12:12] * kefu is now known as kefu|afk
[12:12] * kefu|afk (~kefu@114.86.210.64) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:12] * Guest1005 (~abradshaw@130.117.88.81) Quit (Ping timeout: 480 seconds)
[12:19] * shyu (~Shanzhi@119.254.120.66) Quit (Remote host closed the connection)
[12:25] * analbeard (~shw@5.153.255.226) Quit (Quit: Leaving.)
[12:26] * kefu (~kefu@114.86.210.64) has joined #ceph
[12:28] * yanzheng2 (~zhyan@182.139.21.196) Quit (Quit: This computer has gone to sleep)
[12:29] * adrian (~abradshaw@149.6.38.86) has joined #ceph
[12:30] * adrian is now known as Guest1017
[12:32] * mookins (~mookins@27-32-204-26.static.tpgi.com.au) has joined #ceph
[12:39] * brutuscat (~brutuscat@17.Red-83-47-123.dynamicIP.rima-tde.net) has joined #ceph
[12:40] * yanzheng (~zhyan@182.139.21.196) has joined #ceph
[12:41] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (Ping timeout: 480 seconds)
[12:41] * mookins (~mookins@27-32-204-26.static.tpgi.com.au) has left #ceph
[12:45] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[12:46] * kefu (~kefu@114.86.210.64) has joined #ceph
[12:47] * dgurtner (~dgurtner@178.197.231.188) Quit (Ping timeout: 480 seconds)
[12:49] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[12:50] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) has joined #ceph
[12:50] * ChanServ sets mode +o elder
[12:51] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[12:52] * ira (~ira@c-71-233-225-22.hsd1.ma.comcast.net) has joined #ceph
[12:57] * yghannam (~yghannam@0001f8aa.user.oftc.net) has joined #ceph
[13:02] * cok (~chk@2a02:2350:18:1010:61b6:68f2:587d:1231) Quit (Quit: Leaving.)
[13:03] * kefu (~kefu@114.86.210.64) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[13:04] * yanzheng (~zhyan@182.139.21.196) Quit (Quit: This computer has gone to sleep)
[13:05] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[13:08] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:08] * yanzheng (~zhyan@182.139.21.196) has joined #ceph
[13:08] * kefu (~kefu@114.86.210.64) has joined #ceph
[13:12] * enzob (~enzob@197.83.200.180) has joined #ceph
[13:12] * brutuscat (~brutuscat@17.Red-83-47-123.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[13:12] * dgurtner (~dgurtner@178.197.231.188) has joined #ceph
[13:13] * yanzheng (~zhyan@182.139.21.196) Quit ()
[13:15] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[13:15] * kefu (~kefu@114.86.210.64) has joined #ceph
[13:17] * muni (~muni@bzq-218-200-252.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[13:20] * nsoffer (~nsoffer@bzq-109-65-255-114.red.bezeqint.net) has joined #ceph
[13:23] * brutuscat (~brutuscat@17.Red-83-47-123.dynamicIP.rima-tde.net) has joined #ceph
[13:24] * yanzheng (~zhyan@182.139.21.196) has joined #ceph
[13:30] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[13:31] * kefu (~kefu@114.86.210.64) has joined #ceph
[13:33] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[13:36] * linjan (~linjan@93-81-96-39.broadband.corbina.ru) has joined #ceph
[13:37] * wicope (~wicope@0001fd8a.user.oftc.net) Quit (Remote host closed the connection)
[13:39] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Quit: doppelgrau)
[13:45] * ganders (~root@190.2.42.21) has joined #ceph
[13:49] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[13:49] * overclk (~overclk@59.93.224.54) has joined #ceph
[13:51] * derjohn_mob (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[13:51] * kmARC (~kmARC@2001:620:20:16:cd84:5e2e:e3b8:5963) has joined #ceph
[13:55] * dgurtner (~dgurtner@178.197.231.188) Quit (Quit: leaving)
[13:56] * cok (~chk@nat-cph5-sys.net.one.com) has joined #ceph
[13:58] * cok (~chk@nat-cph5-sys.net.one.com) has left #ceph
[13:59] * enzob is now known as Enzo
[14:00] * derjohn_mob (~aj@94.119.1.52) has joined #ceph
[14:00] * Enzo is now known as Guest1024
[14:00] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[14:01] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[14:01] * kefu (~kefu@114.86.210.64) has joined #ceph
[14:03] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[14:06] * Guest1024 is now known as enzob
[14:06] * pdrakeweb (~pdrakeweb@oh-71-50-39-25.dhcp.embarqhsd.net) Quit (Quit: Leaving...)
[14:08] * pdrakeweb (~pdrakeweb@cpe-65-185-74-239.neo.res.rr.com) has joined #ceph
[14:10] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[14:11] * kefu (~kefu@114.86.210.64) has joined #ceph
[14:12] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[14:14] * cok (~chk@nat-cph5-sys.net.one.com) has joined #ceph
[14:19] * kefu (~kefu@114.86.210.64) Quit (Ping timeout: 480 seconds)
[14:21] * rwheeler (~rwheeler@pool-173-48-214-9.bstnma.fios.verizon.net) Quit (Quit: Leaving)
[14:22] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[14:23] * kefu (~kefu@120.204.164.217) has joined #ceph
[14:27] * brutuscat (~brutuscat@17.Red-83-47-123.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[14:27] * enzob (~enzob@197.83.200.180) Quit (Quit: Leaving)
[14:29] * kefu_ (~kefu@114.86.210.64) has joined #ceph
[14:30] * enzob (~enzob@197.83.200.180) has joined #ceph
[14:31] * brutuscat (~brutuscat@17.Red-83-47-123.dynamicIP.rima-tde.net) has joined #ceph
[14:32] * danieagle (~Daniel@187.101.45.207) has joined #ceph
[14:33] * hkraal (~hkraal@vps05.rootdomains.eu) has joined #ceph
[14:33] * linjan (~linjan@93-81-96-39.broadband.corbina.ru) Quit (Ping timeout: 480 seconds)
[14:36] * kefu (~kefu@120.204.164.217) Quit (Ping timeout: 480 seconds)
[14:38] * smerz (~ircircirc@37.74.194.90) has joined #ceph
[14:38] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[14:39] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[14:41] * cok (~chk@nat-cph5-sys.net.one.com) has left #ceph
[14:43] * RMar04 (~RMar04@5.153.255.226) has joined #ceph
[14:43] * enzob (~enzob@197.83.200.180) Quit (Quit: Leaving)
[14:44] <RMar04> Afternoon! I have a quick question regarding ceph RBD volumes. How is data disposal taken care of ? What happens when a volume is removed. Is the data that was occupying this space securely removed in any way?
[14:44] * nicatronTg (~Scaevolus@freedom.ip-eend.nl) has joined #ceph
[14:47] * analbeard (~shw@5.153.255.226) has joined #ceph
[14:48] <jcsp> RMar04: when you do "rbd rm", that is deleting the objects that made up the volume. They are about as deleted as a file on a typical local filesystem would be (i.e. potentially forensically recoverable if someone pulls an OSD drive)
[14:49] <RMar04> Sure, If you wanted to be sure, I take it you would run some type of data destruction on the rbd volume before the rbd rm?
[14:50] * ketor (~ketor@li218-88.members.linode.com) Quit (Remote host closed the connection)
[14:51] <[arx]> wouldn't you have to pull every drive that makes up the volume?
[14:52] <jcsp> no. For storage in general (not just ceph), overwriting data from the application layer does not guarantee destruction. Consider SSDs remapping sectors, filesystems that might write a fresh block instead of overwriting an exinsing one
[14:52] <jcsp> *existing
[14:52] <jcsp> in the most general case, there is only one secure way to delete, and that is to have originally written your data encrypted, and then subsequently get rid of the key
[14:53] <jcsp> also, consider the case where you had written the data, and an OSD was offline, and you deleted: until that OSD came back online and caught up, it would still hold a copy of your data
[14:54] <RMar04> Yes that is true. If for instance though, you created a volume, added data, removed that volume, and then a new volume was created made up of objects, of which a couple sat on the same blocks of an OSD as the previous. Would it be in any way possible to retrieve data from those blocks through that new object?
[14:55] <RMar04> if said data was never encrypted
[14:55] <jcsp> from your RBD clients, no.
[14:55] <hkraal> jcsp: make good use of fault domains so you know where the data physically sits so you can destruct it?
[14:55] * capri (~capri@212.218.127.222) has joined #ceph
[14:56] <jcsp> hkraal: no, because you can't even trust your physical devices to really forget data when you write zeros to a given block
[14:56] * hkraal just met ceph like 60 minutes ago but that seems the way to go from my POI
[14:56] <jcsp> (i'm speaking in the general case, secure with a capital S point of view)
[14:56] <hkraal> jcsp: you need to know where the data is to be able to destroy it. How you destroy it is part 2 and needs to be physical?
[14:56] <jcsp> RMar04: if you're just worried about other RBD users, then there's nothing to worry about. Your use of "securely removed" kind of set me off :-)
[14:58] <RMar04> In this instance, the RBD user is 'cinder' from the openstack project, that could have many different 'clients' accessing through the same Ceph auth, so is that still a concern?
[14:59] * jdillaman (~jdillaman@108.18.97.82) has joined #ceph
[15:00] * sage (~quassel@2607:f298:6050:709d:1dca:9138:6630:167a) Quit (Remote host closed the connection)
[15:01] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[15:01] * sage (~quassel@2607:f298:6050:709d:1d6d:1cad:b22e:f8e4) has joined #ceph
[15:01] * ChanServ sets mode +o sage
[15:01] <jcsp> RMar04: once an rbd image is deleted (i.e. rbd rm returns), no clients (including the one that created it) can get at that data any more. Isolating different clients while the image still exists is a different question
[15:01] <RMar04> OK, perfect, thank you very much for your information!
[15:01] <jcsp> ...and not one that I know the details of off the top of my head. You can definitely use OSD auth caps to restrict different rbd clients to particular object prefixes.
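A hedged sketch of the kind of cap restriction jcsp mentions; the client name and pool are placeholders, and a genuinely working per-client profile needs more grants than this:

    # confine a client to a single pool
    ceph auth caps client.tenant1 mon 'allow r' osd 'allow rwx pool=volumes'
    # finer-grained variants can additionally match object prefixes, e.g.
    #   osd 'allow rwx pool=volumes object_prefix rbd_data.'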
[15:02] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Remote host closed the connection)
[15:05] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:08] * overclk (~overclk@59.93.224.54) Quit (Remote host closed the connection)
[15:08] * overclk (~overclk@59.93.224.54) has joined #ceph
[15:08] * nardial (~ls@dslb-088-072-089-026.088.072.pools.vodafone-ip.de) has joined #ceph
[15:08] * kefu_ (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[15:08] * gucki (~smuxi@office.netskin.com) Quit (Remote host closed the connection)
[15:08] * gucki (~smuxi@office.netskin.com) has joined #ceph
[15:09] * kefu (~kefu@114.86.210.64) has joined #ceph
[15:09] * sjm (~sjm@pool-100-1-115-73.nwrknj.fios.verizon.net) has joined #ceph
[15:10] * analbeard (~shw@5.153.255.226) Quit (Quit: Leaving.)
[15:11] * analbeard (~shw@5.153.255.226) has joined #ceph
[15:11] * analbeard (~shw@5.153.255.226) Quit ()
[15:12] * analbeard (~shw@5.153.255.226) has joined #ceph
[15:14] * nicatronTg (~Scaevolus@5NZAAFD60.tor-irc.dnsbl.oftc.net) Quit ()
[15:15] * yanzheng (~zhyan@182.139.21.196) Quit (Quit: This computer has gone to sleep)
[15:15] * squizzi (~squizzi@nat-pool-rdu-t.redhat.com) Quit (Remote host closed the connection)
[15:16] * tupper (~tcole@173.38.117.84) has joined #ceph
[15:16] * georgem1 (~gmihaiesc@206.108.127.16) has joined #ceph
[15:16] * yanzheng (~zhyan@182.139.21.196) has joined #ceph
[15:18] * sean (~seapasull@95.85.33.150) has joined #ceph
[15:18] * gucki (~smuxi@office.netskin.com) Quit (Ping timeout: 480 seconds)
[15:19] * gucki (~smuxi@mx01.lifecodexx.com) has joined #ceph
[15:20] * linjan (~linjan@93-81-96-39.broadband.corbina.ru) has joined #ceph
[15:21] * terje (~root@135.109.216.239) has joined #ceph
[15:21] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[15:22] * kefu (~kefu@114.86.210.64) has joined #ceph
[15:23] * overclk (~overclk@59.93.224.54) Quit (Remote host closed the connection)
[15:24] * overclk (~overclk@59.93.224.54) has joined #ceph
[15:26] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[15:27] * kefu (~kefu@114.86.210.64) has joined #ceph
[15:27] * shylesh__ (~shylesh@121.244.87.124) has joined #ceph
[15:29] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[15:29] * zenpac (~zenpac3@66.55.33.66) Quit (Ping timeout: 480 seconds)
[15:30] * kefu (~kefu@114.86.210.64) has joined #ceph
[15:30] * zenpac (~zenpac3@66.55.33.66) has joined #ceph
[15:31] * gucki (~smuxi@mx01.lifecodexx.com) Quit (Ping timeout: 480 seconds)
[15:32] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[15:32] * gucki (~smuxi@office.netskin.com) has joined #ceph
[15:33] * brutusca_ (~brutuscat@17.Red-83-47-123.dynamicIP.rima-tde.net) has joined #ceph
[15:33] * brutuscat (~brutuscat@17.Red-83-47-123.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[15:34] * linjan (~linjan@93-81-96-39.broadband.corbina.ru) Quit (Ping timeout: 480 seconds)
[15:34] * yanzheng (~zhyan@182.139.21.196) Quit (Quit: This computer has gone to sleep)
[15:35] * kefu (~kefu@114.86.210.64) has joined #ceph
[15:37] * sjm (~sjm@pool-100-1-115-73.nwrknj.fios.verizon.net) Quit (Read error: No route to host)
[15:39] * sjm (~sjm@pool-100-1-115-73.nwrknj.fios.verizon.net) has joined #ceph
[15:39] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[15:39] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[15:39] * overclk (~overclk@59.93.224.54) Quit (Remote host closed the connection)
[15:39] * kefu (~kefu@114.86.210.64) has joined #ceph
[15:41] * scuttle|afk is now known as scuttlemonkey
[15:43] * terje (~root@135.109.216.239) has joined #ceph
[15:43] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[15:46] * RMar04 (~RMar04@5.153.255.226) Quit (Quit: Leaving.)
[15:46] * nardial (~ls@dslb-088-072-089-026.088.072.pools.vodafone-ip.de) Quit (Quit: Leaving)
[15:47] * Concubidated (~Adium@66-87-144-189.pools.spcsdns.net) Quit (Quit: Leaving.)
[15:49] * analbeard (~shw@5.153.255.226) Quit (Quit: Leaving.)
[15:49] * RMar04 (~RMar04@5.153.255.226) has joined #ceph
[15:49] * overclk (~overclk@59.93.224.54) has joined #ceph
[15:50] * terje_ (~root@135.109.216.239) has joined #ceph
[15:51] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[15:52] * dgurtner (~dgurtner@178.197.231.188) has joined #ceph
[15:52] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[15:56] * sankarshan (~sankarsha@106.206.151.145) has joined #ceph
[15:57] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[15:57] * RMar04 (~RMar04@5.153.255.226) Quit (Ping timeout: 480 seconds)
[15:57] * overclk (~overclk@59.93.224.54) Quit (Remote host closed the connection)
[15:57] * kefu (~kefu@114.86.210.64) has joined #ceph
[16:00] * overclk (~overclk@59.93.224.54) has joined #ceph
[16:02] * kefu_ (~kefu@120.204.164.217) has joined #ceph
[16:02] * analbeard (~shw@5.153.255.226) has joined #ceph
[16:03] * jvilla (~juvilla@157.166.165.129) has joined #ceph
[16:04] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[16:04] * kefu__ (~kefu@114.86.210.64) has joined #ceph
[16:05] * kefu (~kefu@114.86.210.64) Quit (Read error: Connection reset by peer)
[16:05] * Concubidated (~Adium@66-87-144-189.pools.spcsdns.net) has joined #ceph
[16:06] * RMar04 (~RMar04@5.153.255.226) has joined #ceph
[16:06] <nils_> hmm my OSD processes sometimes seem to die with a failed assertion: 2015-07-21 22:29:18.095006 7f2960f18700 -1 os/FileStore.cc: In function 'virtual int FileStore::read(coll_t, const ghobject_t&, uint64_t, size_t, ceph::bufferlist&, uint32_t, bool)' thread 7f2960f18700 time 2015-07-21 22:29:17.842542
[16:06] <nils_> os/FileStore.cc: 2850: FAILED assert(allow_eio || !m_filestore_fail_eio || got != -5)
[16:08] * yanzheng (~zhyan@182.139.21.196) has joined #ceph
[16:10] * kefu__ (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[16:10] * arbrandes (~arbrandes@179.210.13.90) Quit (Remote host closed the connection)
[16:11] * terje (~root@135.109.216.239) has joined #ceph
[16:11] * kefu (~kefu@114.86.210.64) has joined #ceph
[16:11] <Be-El> nils_: broken hard disk?
[16:11] * analbeard (~shw@5.153.255.226) Quit (Remote host closed the connection)
[16:11] * Docta (~Docta@pool-173-52-55-235.nycmny.fios.verizon.net) Quit ()
[16:12] * kefu_ (~kefu@120.204.164.217) Quit (Ping timeout: 480 seconds)
[16:12] <nils_> Be-El: hmm I would probably see different errors then, but I'll check it just to be safe.
[16:13] * terje_ (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[16:13] * arbrandes (~arbrandes@179.210.13.90) has joined #ceph
[16:16] <nils_> looks fine
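The assert nils_ quotes trips when a read from the backing filesystem returns -5 (EIO), which is why Be-El suspects the disk. A few checks one might run besides SMART; the device name and log path are placeholders:

    # SMART health and error counters for the OSD's disk
    smartctl -a /dev/sdb
    # kernel-level I/O errors around the time of the crash
    dmesg | grep -i 'i/o error'
    # filesystem complaints in the system log (path varies by distro)
    grep -i xfs /var/log/syslog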
[16:17] * georgem1 (~gmihaiesc@206.108.127.16) Quit (Quit: georgem1)
[16:18] * linjan (~linjan@93-81-96-39.broadband.corbina.ru) has joined #ceph
[16:18] * terje_ (~root@135.109.216.239) has joined #ceph
[16:19] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[16:20] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[16:20] * kefu (~kefu@114.86.210.64) has joined #ceph
[16:22] * Kalado (~dotblank@tor-exit.squirrel.theremailer.net) has joined #ceph
[16:23] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[16:24] * terje (~root@135.109.216.239) has joined #ceph
[16:25] * kefu (~kefu@114.86.210.64) has joined #ceph
[16:25] * haomaiwang (~haomaiwan@li1067-145.members.linode.com) has joined #ceph
[16:26] * overclk (~overclk@59.93.224.54) Quit (Remote host closed the connection)
[16:27] * wushudoin_ (~wushudoin@38.140.108.2) has joined #ceph
[16:28] * terje_ (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[16:29] * linjan (~linjan@93-81-96-39.broadband.corbina.ru) Quit (Ping timeout: 480 seconds)
[16:32] * terje_ (~root@135.109.216.239) has joined #ceph
[16:33] * wushudoin (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[16:34] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[16:34] * amatus (amatus@leon.g-cipher.net) has left #ceph
[16:35] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[16:35] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[16:35] * davidz (~davidz@2605:e000:1313:8003:91a2:29da:6b0b:6902) Quit (Read error: Connection reset by peer)
[16:36] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[16:37] * kefu (~kefu@114.86.210.64) has joined #ceph
[16:37] * overclk (~overclk@59.93.224.54) has joined #ceph
[16:38] * davidz (~davidz@cpe-23-242-27-128.socal.res.rr.com) has joined #ceph
[16:39] * terje (~root@135.109.216.239) has joined #ceph
[16:39] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[16:39] * overclk (~overclk@59.93.224.54) Quit (Read error: Connection reset by peer)
[16:40] * wushudoin_ is now known as wushudoin
[16:40] * overclk (~overclk@59.93.224.54) has joined #ceph
[16:40] * kefu (~kefu@114.86.210.64) has joined #ceph
[16:40] * terje_ (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[16:41] * kefu_ (~kefu@120.204.164.217) has joined #ceph
[16:41] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[16:41] * yanzheng (~zhyan@182.139.21.196) Quit (Quit: This computer has gone to sleep)
[16:42] * linjan (~linjan@93-81-96-39.broadband.corbina.ru) has joined #ceph
[16:43] * jcsp (~jspray@summerhall-meraki1.fluency.net.uk) Quit (Remote host closed the connection)
[16:45] * kefu__ (~kefu@114.86.210.64) has joined #ceph
[16:46] * terje_ (~root@135.109.216.239) has joined #ceph
[16:47] * lpabon (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[16:48] * dgbaley27 (~matt@c-67-176-93-83.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[16:48] * kefu (~kefu@114.86.210.64) Quit (Ping timeout: 480 seconds)
[16:48] * overclk (~overclk@59.93.224.54) Quit (Ping timeout: 480 seconds)
[16:48] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[16:49] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[16:49] * bitserker (~toni@88.87.194.130) Quit (Ping timeout: 480 seconds)
[16:49] * muni (~muni@bzq-218-200-252.red.bezeqint.net) has joined #ceph
[16:51] * yanzheng (~zhyan@182.139.21.196) has joined #ceph
[16:51] * kefu__ (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[16:52] * kefu_ (~kefu@120.204.164.217) Quit (Ping timeout: 480 seconds)
[16:52] * Kalado (~dotblank@7R2AACXR1.tor-irc.dnsbl.oftc.net) Quit ()
[16:52] * kefu (~kefu@114.86.210.64) has joined #ceph
[16:52] * terje (~root@135.109.216.239) has joined #ceph
[16:54] * RMar04 (~RMar04@5.153.255.226) Quit (Quit: Leaving.)
[16:54] * terje_ (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[16:56] * Concubidated (~Adium@66-87-144-189.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[16:56] * Concubidated (~Adium@161.225.196.30) has joined #ceph
[16:57] * calvinx (~calvin@101.100.172.246) has joined #ceph
[16:58] * bitserker (~toni@88.87.194.130) has joined #ceph
[16:59] * RMar04 (~RMar04@5.153.255.226) has joined #ceph
[16:59] * RMar04 (~RMar04@5.153.255.226) Quit ()
[17:00] * haomaiwang (~haomaiwan@li1067-145.members.linode.com) Quit (Remote host closed the connection)
[17:00] * terje (~root@135.109.216.239) Quit (Ping timeout: 480 seconds)
[17:01] * haomaiwang (~haomaiwan@li1067-145.members.linode.com) has joined #ceph
[17:01] * kefu_ (~kefu@120.204.164.217) has joined #ceph
[17:02] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) Quit (Read error: No route to host)
[17:04] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) has joined #ceph
[17:04] * terje (~root@135.109.216.239) has joined #ceph
[17:05] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[17:05] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Remote host closed the connection)
[17:05] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[17:06] * mgolub (~Mikolaj@91.225.201.153) has joined #ceph
[17:07] * georgem1 (~gmihaiesc@206.108.127.16) has joined #ceph
[17:09] * kefu (~kefu@114.86.210.64) Quit (Ping timeout: 480 seconds)
[17:09] * RMar04 (~RMar04@5.153.255.226) has joined #ceph
[17:10] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[17:10] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[17:11] * oro (~oro@2001:620:20:16:cd84:5e2e:e3b8:5963) Quit (Remote host closed the connection)
[17:11] * kmARC (~kmARC@2001:620:20:16:cd84:5e2e:e3b8:5963) Quit (Remote host closed the connection)
[17:11] * mhack (~mhack@nat-pool-bos-t.redhat.com) Quit (Remote host closed the connection)
[17:13] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[17:14] * shylesh__ (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[17:14] * shohn (~shohn@164.139.7.150) Quit (Quit: Leaving.)
[17:16] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[17:16] * bitserker1 (~toni@88.87.194.130) has joined #ceph
[17:16] * bitserker (~toni@88.87.194.130) Quit (Read error: Connection reset by peer)
[17:18] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[17:18] * haomaiwang (~haomaiwan@li1067-145.members.linode.com) Quit (Remote host closed the connection)
[17:18] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[17:19] * haomaiwang (~haomaiwan@li1072-91.members.linode.com) has joined #ceph
[17:19] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[17:19] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[17:21] * RMar04 (~RMar04@5.153.255.226) Quit (Quit: Leaving.)
[17:21] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) Quit (Quit: Leaving.)
[17:21] * kefu (~kefu@114.86.210.64) has joined #ceph
[17:24] * jdillaman (~jdillaman@108.18.97.82) Quit (Quit: jdillaman)
[17:24] * RMar04 (~RMar04@5.153.255.226) has joined #ceph
[17:25] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:25] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[17:25] * rendar (~I@host115-196-dynamic.181-80-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[17:26] * adeel (~adeel@2602:ffc1:1:face:e90b:c12d:9e2d:6bcc) Quit (Quit: Leaving...)
[17:26] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[17:26] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[17:27] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:27] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[17:28] * kefu_ (~kefu@120.204.164.217) Quit (Ping timeout: 480 seconds)
[17:31] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[17:32] * rendar (~I@host147-5-dynamic.55-79-r.retail.telecomitalia.it) has joined #ceph
[17:32] * kefu (~kefu@114.86.210.64) Quit (Max SendQ exceeded)
[17:33] * kefu (~kefu@114.86.210.64) has joined #ceph
[17:35] * RMar04 (~RMar04@5.153.255.226) Quit (Quit: Leaving.)
[17:35] * zaitcev (~zaitcev@c-76-113-49-212.hsd1.nm.comcast.net) has joined #ceph
[17:37] * tuhnis (~TheDoudou@tor-exit-node.7by7.de) has joined #ceph
[17:40] * thomnico (~thomnico@2a01:e35:8b41:120:c535:66ed:1a0c:9fd) has joined #ceph
[17:40] * overclk (~overclk@59.93.224.54) has joined #ceph
[17:47] * jnq (~jnq@0001b7cc.user.oftc.net) Quit (Ping timeout: 480 seconds)
[17:47] * superbeer (~MSX@studiovideo.org) Quit (Ping timeout: 480 seconds)
[17:48] * superbeer (~MSX@studiovideo.org) has joined #ceph
[17:48] * florz (nobody@2001:1a50:503c::2) Quit (Read error: No route to host)
[17:49] * overclk (~overclk@59.93.224.54) Quit (Ping timeout: 480 seconds)
[17:49] * yguang11 (~yguang11@2001:4998:effd:600:d5e1:b385:729e:daa8) has joined #ceph
[17:49] * brutusca_ (~brutuscat@17.Red-83-47-123.dynamicIP.rima-tde.net) Quit (Read error: Connection reset by peer)
[17:50] * florz (nobody@2001:1a50:503c::2) has joined #ceph
[17:50] * jnq (~jnq@95.85.22.50) has joined #ceph
[17:52] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[17:52] * bitserker (~toni@88.87.194.130) has joined #ceph
[17:52] * sleinen1 (~Adium@2001:620:0:69::104) Quit (Ping timeout: 480 seconds)
[17:52] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:52] * RMar04 (~RMar04@5.153.255.226) has joined #ceph
[17:53] * bitserker1 (~toni@88.87.194.130) Quit (Ping timeout: 480 seconds)
[17:53] * RMar04 (~RMar04@5.153.255.226) Quit ()
[17:54] * capri (~capri@212.218.127.222) has joined #ceph
[17:57] * yanzheng (~zhyan@182.139.21.196) Quit (Quit: This computer has gone to sleep)
[18:00] * Guest1017 (~abradshaw@149.6.38.86) Quit (Ping timeout: 480 seconds)
[18:01] * shohn (~Adium@p57AFCC39.dip0.t-ipconnect.de) has joined #ceph
[18:02] * muni (~muni@bzq-218-200-252.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[18:03] * brutuscat (~brutuscat@17.Red-83-47-123.dynamicIP.rima-tde.net) has joined #ceph
[18:04] * shohn1 (~Adium@p57AFCC39.dip0.t-ipconnect.de) has joined #ceph
[18:04] * shohn (~Adium@p57AFCC39.dip0.t-ipconnect.de) Quit (Read error: Connection reset by peer)
[18:07] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:07] * tuhnis (~TheDoudou@7R2AACXV8.tor-irc.dnsbl.oftc.net) Quit ()
[18:07] * root (~root@ec2-52-3-113-159.compute-1.amazonaws.com) has joined #ceph
[18:08] * kmARC (~kmARC@84-73-73-158.dclient.hispeed.ch) has joined #ceph
[18:08] <root> Hello Cephers
[18:09] * linjan (~linjan@93-81-96-39.broadband.corbina.ru) Quit (Ping timeout: 480 seconds)
[18:10] * jvilla (~juvilla@157.166.165.129) Quit (Quit: jvilla)
[18:12] <root> ident Abdi
[18:13] <root> exit
[18:14] * nardial (~ls@dslb-088-072-089-026.088.072.pools.vodafone-ip.de) has joined #ceph
[18:14] * gucki (~smuxi@office.netskin.com) Quit (Ping timeout: 480 seconds)
[18:15] * root (~root@ec2-52-3-113-159.compute-1.amazonaws.com) Quit (Quit: leaving)
[18:17] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[18:17] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[18:20] * Abdi (~root@ec2-52-3-113-159.compute-1.amazonaws.com) has joined #ceph
[18:20] * xarses_ (~xarses@12.164.168.117) has joined #ceph
[18:21] * georgem1 (~gmihaiesc@206.108.127.16) has left #ceph
[18:21] * georgem (~Adium@206.108.127.16) has joined #ceph
[18:21] * thomnico_ (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) has joined #ceph
[18:21] <Abdi> whois
[18:22] * thomnico (~thomnico@2a01:e35:8b41:120:c535:66ed:1a0c:9fd) Quit (Ping timeout: 480 seconds)
[18:23] <Abdi> Hello Cephers!!!
[18:23] <Abdi> I'm trying to integrate an OpenStack cloud with a Ceph Cluster
[18:23] <Abdi> After configuring glance with the correct parameters, I'm getting an error while trying to upload an image
[18:24] <Abdi> here is the error I'm getting: PermissionError: error creating image
[18:24] * reinhardt1053 (~reinhardt@195.46.234.215) Quit (Remote host closed the connection)
[18:24] <Abdi> Does anyone have an idea why I'm getting this permission error?
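(For context, a minimal sketch of the glance-api.conf RBD settings being discussed; the pool name glance-images, the cephx id glance and the non-default /etc/ceph/rbdcluster.conf path are taken from the conversation, and the section name varies between releases, so treat this as an illustration rather than the exact config in use:)

    [glance_store]                                    # [DEFAULT] on older releases
    default_store = rbd
    stores = rbd
    rbd_store_pool = glance-images                    # pool used in this conversation
    rbd_store_user = glance                           # cephx id, not the key itself
    rbd_store_ceph_conf = /etc/ceph/rbdcluster.conf   # non-default cluster conf file
    show_image_direct_url = True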
[18:25] * reinhardt1053 (~reinhardt@195.46.234.215) has joined #ceph
[18:26] * reinhard_ (~reinhardt@195.46.234.215) has joined #ceph
[18:26] * reinhardt1053 (~reinhardt@195.46.234.215) Quit (Read error: Connection reset by peer)
[18:27] * kefu is now known as kefu|afk
[18:27] * reinhard_ (~reinhardt@195.46.234.215) Quit (Remote host closed the connection)
[18:28] * branto (~branto@ip-213-220-232-132.net.upcbroadband.cz) Quit (Quit: Leaving.)
[18:30] * jordanP (~jordan@scality-jouf-2-194.fib.nerim.net) Quit (Quit: Leaving)
[18:30] * sankarshan (~sankarsha@106.206.151.145) Quit (Quit: Leaving...)
[18:30] * arcimboldo (~antonio@dhcp-y11-zi-s3it-130-60-34-042.uzh.ch) has joined #ceph
[18:30] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[18:31] <georgem> Abdi: I would enable debug and verbose in cinder.conf, restart the cinder services and try again ... so you get a better error log
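(A sketch of what that suggestion looks like, shown here for glance-api.conf since the thread is about glance rather than cinder; the same two flags exist in most OpenStack services of that era:)

    [DEFAULT]
    debug = True
    verbose = True

    # then restart the affected services, e.g. on Ubuntu of that era:
    # service glance-api restart && service glance-registry restart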
[18:32] * kefu|afk (~kefu@114.86.210.64) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[18:33] <Abdi> Hi georgem, thanks for the reply... I can enable debug and verbose in glance api and registry but...
[18:33] * bitserker (~toni@88.87.194.130) Quit (Ping timeout: 480 seconds)
[18:33] * moore (~moore@64.202.160.88) has joined #ceph
[18:33] <Abdi> I even tried to upload the image directly into the pool via "rbd" command
[18:33] * shohn1 (~Adium@p57AFCC39.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[18:34] <georgem> Abdi: and?
[18:34] <Abdi> and I got a similar error... I'll paste the exact message in a sec.
[18:34] <Abdi> rbd: import failed: (1) Operation not permitted
[18:35] <Abdi> Yes... I checked that the keyring and user exist in the cluster, and that's the troubling part... it all looks good
[18:35] <m0zes> sounds like cephx keys aren't configured correctly.
[18:35] * TheSov (~TheSov@204.13.200.248) has joined #ceph
[18:36] * jwilkins (~jwilkins@2601:644:4100:bfef:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[18:37] <Abdi> hey m0zes, how do I make sure that the keyrings are configured correctly
[18:37] <Abdi> ??
[18:37] <Abdi> besides doing a "ceph auth list" and making sure that the key is in the /etc/ceph/ folder of the glance api server
[18:38] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:38] <m0zes> are the permissions of the keyrings correct? can the glance/cinder users read their own keys?
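(A quick way to answer m0zes' question, assuming the keyring path Abdi mentions; this only checks OS-level file permissions, not the cephx caps:)

    ls -l /etc/ceph/ceph.client.glance.keyring
    # should print "[client.glance]" rather than "Permission denied" if the glance user can read it:
    sudo -u glance head -1 /etc/ceph/ceph.client.glance.keyring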
[18:39] * reed (~reed@67.23.204.130) has joined #ceph
[18:40] <Abdi> glance has the correct OS permission on the keyring
[18:40] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:41] <Abdi> yes, each user has their own keyring
[18:41] * overclk (~overclk@59.93.224.54) has joined #ceph
[18:42] <Anticimex> loicd: here? looking for pointer to how you're using https://github.com/stackforge/puppet-ceph/blob/master/manifests/conf.pp#L31
[18:42] <Abdi> root@controller1:/home/abdi# rbd -p glance-images -c /etc/ceph/rbdcluster.conf --id glance --keyring /etc/ceph/ceph.client.glance.keyring import cirros-0.3.1-x86_64-disk.img
[18:42] <Anticimex> i figure it's a decent way to get my ceph cluster config into hiera; i'm declaring ceph:conf with args => hiera('myhashthingy')
[18:42] <Abdi> rbd: image creation failed
[18:43] <Abdi> Importing image: 0% complete...failed.
[18:43] <Abdi> 2015-07-22 12:34:40.533696 7f3392057840 -1 librbd: Could not tell if cirros-0.3.1-x86_64-disk.img already exists
[18:43] <Abdi> rbd: import failed: (1) Operation not permitted
[18:43] <m0zes> Abdi: I would guess the next steps would be to see what the exact errors you are getting out of rbd (and possibly increase debugging out of ceph). sudo -u glance rbd -p images ls
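(One way to get more detail out of the rbd command itself, assuming the standard librados/librbd debug options and the paths used in this conversation; the extra output goes to stderr:)

    sudo -u glance rbd -p glance-images -c /etc/ceph/rbdcluster.conf \
        --id glance --keyring /etc/ceph/ceph.client.glance.keyring \
        --debug-rbd 20 --debug-rados 20 --debug-auth 20 ls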
[18:43] <TheSov> is there any reason anyone can think of, that ceph would not be able to replace a commercial iscsi san in a completely linux environment
[18:44] <m0zes> there might be an issue with hosts that are booting via iscsi.
[18:45] * thomnico_ (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[18:45] <m0zes> of course, I can't think of any *serious* problems.
[18:47] * nardial (~ls@dslb-088-072-089-026.088.072.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[18:47] <TheSov> m0zes, ignoring boot, no other issues?
[18:49] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[18:49] <m0zes> there could always be issues. do you have to support old clients? iscsi has existed for a *long* time. a ceph client for linux 2.6.18 could be painful...
[18:49] * overclk (~overclk@59.93.224.54) Quit (Ping timeout: 480 seconds)
[18:49] * derjohn_mob (~aj@94.119.1.52) Quit (Ping timeout: 480 seconds)
[18:49] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[18:49] <m0zes> are there any likely ones that I can think of? no.
[18:50] <TheSov> hmmmmm
[18:52] <Abdi> Hi m0zes... I'm going to enable debug and verbose on both sides: on the ceph cluster and on the glance-api servers, and will report back in a few minutes
[18:53] * nardial (~ls@dslb-088-072-089-026.088.072.pools.vodafone-ip.de) has joined #ceph
[18:55] <Abdi> m0zes: when I issue the following command: sudo -u glance rbd -p glance-images -c /etc/ceph/rbdcluster.conf --keyring /etc/ceph/ceph.client.glance.keyring ls
[18:55] <Abdi> I'm getting this error: 2015-07-22 12:55:08.564566 7fc78f752840 0 librados: client.admin authentication error (1) Operation not permitted
[18:56] <Abdi> rbd: couldn't connect to the cluster!
[18:56] <Abdi> why is the client.admin keyring being used here? I'm a bit confused, since I specified the glance key
[18:56] * jclm1 (~jclm@ip24-253-98-109.lv.lv.cox.net) has joined #ceph
[18:57] <m0zes> rbdcluster.conf. shouldn't that be /etc/ceph/ceph.conf? did you tell glance and cinder that ceph.conf doesn't exist and to check rbdcluster.conf?
[18:57] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:57] <m0zes> Abdi: you specified the glance key, not the glance id.
[18:57] <m0zes> you would need to specify both.
[18:57] <m0zes> or just the id.
[18:57] <Abdi> rbdcluster.conf is my ceph configuration file, and yes, here is my glance-api.conf snippet...
[18:58] <Abdi> okay... hold on...
[18:58] * ircolle (~Adium@2601:285:201:2bf9:1cae:844d:bff8:d8ad) has joined #ceph
[18:58] <Abdi> here is the output error: root@controller1:/home/abdi# sudo -u glance rbd -p glance-images -c /etc/ceph/rbdcluster.conf --id glance --keyring /etc/ceph/ceph.client.glance.keyring ls
[18:58] <Abdi> rbd: list: (1) Operation not permitted
[18:58] * jclm2 (~jclm@ip24-253-98-109.lv.lv.cox.net) has joined #ceph
[18:59] <m0zes> there you go. it is an issue with cephx perms.
[18:59] * jclm (~jclm@ip24-253-98-109.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[19:00] <m0zes> you'll need to 'ceph auth list' and see what the perms are, and fix them.
[19:00] * overclk (~overclk@59.93.224.54) has joined #ceph
[19:00] <m0zes> anyway, I've got a meeting.
[19:03] <Abdi> here is the permission for glance user:
[19:03] <Abdi> ceph-admin@mgmt:~/rbdcluster$ ceph --cluster rbdcluster auth get client.glance
[19:03] <Abdi> exported keyring for client.glance
[19:03] <Abdi> [client.glance]
[19:03] <Abdi> key = AQDfb65VTo66BBAAe405rs4ejdLS3Vy2Yzkowg==
[19:03] <Abdi> caps mon = "allow r"
[19:03] <Abdi> caps osd = "allow class-read object_prefix rbd_children, allow rwx pool= glance-images"
[19:03] <Abdi> ceph-admin@mgmt:~/rbdcluster$
[19:03] * snakamoto (~Adium@192.16.26.2) has joined #ceph
[19:04] <Abdi> sorry... thanks for your inputs
[19:04] * snakamoto1 (~Adium@192.16.26.2) has joined #ceph
[19:04] * jclm1 (~jclm@ip24-253-98-109.lv.lv.cox.net) Quit (Ping timeout: 480 seconds)
[19:05] * kmARC (~kmARC@84-73-73-158.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:06] * thomnico (~thomnico@2a01:e35:8b41:120:cc9f:d17c:c2bc:9a39) has joined #ceph
[19:06] <georgem> Abdi: are you running these commands from the same host? is your ceph-admin host also the glance server?
[19:06] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[19:07] <Abdi> hi georgem - I'm running the ceph auth on the ceph-admin node and the rbd on the controller where the glance-api service is running
[19:07] <Abdi> so 2 different nodes: glance and ceph-admin
[19:07] <georgem> Abdi: make sure the controller node can reach the mon servers and the ceph-public network
[19:08] <Abdi> I'll check it in couple of sec...
[19:08] * overclk (~overclk@59.93.224.54) Quit (Remote host closed the connection)
[19:09] <Abdi> yes... it can ping it...
[19:09] <Abdi> root@controller1:/home/abdi# ping ceph-node2
[19:09] <Abdi> PING ceph-node2 (49.27.142.52) 56(84) bytes of data.
[19:09] <Abdi> 64 bytes from ceph-node2 (49.27.142.52): icmp_seq=1 ttl=64 time=0.168 ms
[19:09] <Abdi> 64 bytes from ceph-node2 (49.27.142.52): icmp_seq=2 ttl=64 time=0.163 ms
[19:09] <Abdi> root@controller1:/home/abdi# ping ceph-node1
[19:09] <Abdi> PING ceph-node1 (49.27.142.51) 56(84) bytes of data.
[19:09] <Abdi> 64 bytes from ceph-node1 (49.27.142.51): icmp_seq=1 ttl=64 time=0.142 ms
[19:09] <Abdi> 64 bytes from ceph-node1 (49.27.142.51): icmp_seq=2 ttl=64 time=0.102 ms
[19:10] <Abdi> root@controller1:/home/abdi# ping ceph-node3
[19:10] <Abdi> PING ceph-node3 (49.27.142.53) 56(84) bytes of data.
[19:10] <Abdi> 64 bytes from ceph-node3 (49.27.142.53): icmp_seq=1 ttl=64 time=0.187 ms
[19:10] <Abdi> 64 bytes from ceph-node3 (49.27.142.53): icmp_seq=2 ttl=64 time=0.167 ms
[19:10] <Abdi> I have 3 mon in the cluster
[19:10] * overclk (~overclk@59.93.224.54) has joined #ceph
[19:10] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[19:11] <Abdi> It looks like a permission (aka capabilities) issue, but the output shows that the correct permissions are applied to the keyring for the user (glance)
[19:12] * snakamoto (~Adium@192.16.26.2) Quit (Ping timeout: 480 seconds)
[19:12] <Abdi> georgem: do you know how to re-create the keyring?
[19:13] <Abdi> I'm not sure why the operation is not permitted when the keyring/user appears to have the proper privileges
[19:16] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[19:16] * reed (~reed@67.23.204.130) Quit (Quit: Ex-Chat)
[19:16] * reed (~reed@67.23.204.130) has joined #ceph
[19:17] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:18] <georgem> Abdi: I don't think the problem is the keys... in my case, if I try with "sudo -u glance rbd -p images -c /etc/ceph/ceph.conf --keyring /etc/ceph/ceph.client.glance.keyring ls" it fails with "Permission denied"
[19:18] * thomnico (~thomnico@2a01:e35:8b41:120:cc9f:d17c:c2bc:9a39) Quit (Ping timeout: 480 seconds)
[19:18] <georgem> Abdi: but "sudo -u glance ceph -n client.glance --keyring /etc/ceph/ceph.client.glance.keyring osd tree" works just fine
[19:19] * arcimboldo (~antonio@dhcp-y11-zi-s3it-130-60-34-042.uzh.ch) Quit (Quit: Ex-Chat)
[19:21] * overclk (~overclk@59.93.224.54) Quit (Quit: Leaving...)
[19:22] <Abdi> oh... let me try it
[19:25] <Abdi> root@controller1:/home/abdi# sudo -u glance ceph -n client.glance --keyring /etc/ceph/ceph.client.glance.keyring osd tree
[19:25] <Abdi> Traceback (most recent call last):
[19:25] <Abdi> File "/usr/bin/ceph", line 74, in <module>
[19:25] <Abdi> from ceph_argparse import \
[19:25] <Abdi> ImportError: No module named ceph_argparse
[19:25] <Abdi> I'm getting an error with the ceph command on my glance server... hence why I use the ceph-admin node
[19:26] <georgem> Abdi: then maybe you didn't install all the packages
[19:26] * reed (~reed@67.23.204.130) Quit (Ping timeout: 480 seconds)
[19:27] <Abdi> looks like the ceph package is not installed... this is the list of packages I've installed on the Glance server
[19:27] <TheSov> ignoring monitors what is the cheapest osd system i can buy that is somewhat modular
[19:27] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[19:28] <Abdi> ceph-common python-ceph glance python-glanceclient
[19:28] <Abdi> georgem: Do you know what packages I need to install on the Glance server
[19:30] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[19:31] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) has joined #ceph
[19:31] * sleinen1 (~Adium@2001:620:0:69::101) has joined #ceph
[19:32] <Abdi> whois georgem
[19:32] <Abdi> georgem: Do you have a list of ceph packages that you've installed on your glance server?
[19:36] * brutuscat (~brutuscat@17.Red-83-47-123.dynamicIP.rima-tde.net) Quit (Remote host closed the connection)
[19:37] <georgem> Abdi: my controllers are also mon servers so they might have more packages, but for the ceph clients I believe this is what you need: ceph
[19:37] <georgem> - ceph-common #|
[19:37] <georgem> - ceph-fs-common #|--> yes, they are already all dependencies from 'ceph'
[19:37] <georgem> - ceph-fuse #|--> however while proceding to rolling upgrades and the 'ceph' package upgrade
[19:37] <georgem> - ceph-mds #|--> they don't get update so we need to force them
[19:37] <georgem> - libcephfs1
[19:38] <georgem> Abdi: from https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-common/tasks/install_on_debian.yml
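(Based on georgem's list, installing the client packages on a Debian/Ubuntu glance node would look roughly like this; exact package names can differ between releases:)

    sudo apt-get install ceph ceph-common ceph-fs-common ceph-fuse ceph-mds libcephfs1 python-ceph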
[19:38] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:38] <georgem> and now I have to go for lunch..
[19:39] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[19:45] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[19:45] * gucki (~smuxi@office.netskin.com) has joined #ceph
[19:49] * dgurtner (~dgurtner@178.197.231.188) Quit (Ping timeout: 480 seconds)
[19:57] <Abdi> georgem: sorry for holding you up... I'll post the result so you can see it after your lunch
[20:00] * stiopa (~stiopa@cpc73828-dals21-2-0-cust630.20-2.cable.virginm.net) has joined #ceph
[20:01] * Hemanth (~Hemanth@117.213.181.242) has joined #ceph
[20:02] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) Quit (Ping timeout: 480 seconds)
[20:05] * georgem (~Adium@206.108.127.16) has joined #ceph
[20:06] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[20:07] <Abdi> there is a bug in ceph-common in Hammer; see for details: http://tracker.ceph.com/issues/11388
[20:14] <georgem> Abdi: did you install the ceph package as well?
[20:20] <Anticimex> loicd: solved it. format was { 'key': value => $value }
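(A heavily hedged sketch of what Anticimex describes; the hiera key name below is made up for illustration, and the exact hash layout expected by stackforge/puppet-ceph's ceph::conf may differ from this:)

    # Puppet side, passing a hiera hash to ceph::conf:
    class { 'ceph::conf':
      args => hiera('my_ceph_conf_args', {}),
    }

    # hiera data (YAML), matching the "{ 'key': value => $value }" shape from the discussion:
    my_ceph_conf_args:
      'global/osd_pool_default_size':
        value: '2'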
[20:20] <Abdi> georgem: no... I was going to but wasn't sure if I need it
[20:20] <Abdi> there is a bug in ceph-common in Hammer
[20:21] <georgem> Abdi: just install the package ...
[20:21] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[20:21] <Abdi> georgem: doing it now...
[20:21] * rotbeard (~redbeard@2a02:908:df10:d300:6267:20ff:feb7:c20) Quit (Quit: Leaving)
[20:23] * ilken (ilk@2602:63:c2a2:af00:7d11:e660:be90:dc9b) has joined #ceph
[20:23] <Abdi> georgem: after installing the ceph package, the ceph command works
[20:24] * nardial (~ls@dslb-088-072-089-026.088.072.pools.vodafone-ip.de) Quit (Remote host closed the connection)
[20:24] <Abdi> by "works" I mean it shows the osd tree
[20:25] <Abdi> georgem: This command return the correct osd information: sudo -u glance ceph -n client.glance -c /etc/ceph/rbdcluster.conf --keyring /etc/ceph/ceph.client.glance.keyring osd tree
[20:26] <georgem> Abdi: then I would say your issue is not with cephx; can you paste the trace error from glance into pastebin?
[20:27] <Abdi> georgem: let me run the glance services with verbose and debug enabled and pastebin it in a couple of minutes.
[20:28] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[20:30] <m0zes> caps osd = "allow class-read object_prefix rbd_children, allow rwx pool= glance-images" looks like a typo
[20:30] <m0zes> there should not be a space between pool= and glance-images.
[20:30] * nsoffer (~nsoffer@bzq-109-65-255-114.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[20:31] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[20:31] <georgem> m0zes: good catch
[20:32] <m0zes> that would probably allow the glance user to check osd tree, but disallow it to read/write to the glance-images pool
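(The corrected caps are not shown in the log; one way to apply them, reusing the cluster name and pool from the conversation, would be:)

    ceph --cluster rbdcluster auth caps client.glance \
        mon 'allow r' \
        osd 'allow class-read object_prefix rbd_children, allow rwx pool=glance-images'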
[20:32] * dopesong (~dopesong@lb1.mailer.data.lt) has joined #ceph
[20:39] * dneary_ (~dneary@AGrenoble-651-1-600-61.w90-52.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[20:39] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[20:39] * Nacer (~Nacer@203-206-190-109.dsl.ovh.fr) has joined #ceph
[20:43] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[20:44] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[20:49] * wushudoin_ (~wushudoin@transit-86-181-132-209.redhat.com) has joined #ceph
[20:55] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[20:55] * sleinen1 (~Adium@2001:620:0:69::101) Quit (Read error: Connection reset by peer)
[20:56] * wushudoin (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[20:57] * sleinen1 (~Adium@2001:620:0:69::101) has joined #ceph
[20:57] * wushudoin_ (~wushudoin@transit-86-181-132-209.redhat.com) Quit (Ping timeout: 480 seconds)
[20:57] * wushudoin_ (~wushudoin@38.140.108.2) has joined #ceph
[20:58] <Abdi> m0zes: let me try you suggestion
[20:59] * kmARC (~kmARC@84-73-73-158.dclient.hispeed.ch) has joined #ceph
[21:01] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) has joined #ceph
[21:01] * shylesh (~shylesh@123.136.223.77) has joined #ceph
[21:02] * rlrevell1 (~leer@vbo1.inmotionhosting.com) has joined #ceph
[21:02] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[21:03] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:04] <cholcombe> to turn up the debugging on a ceph process, that all goes in the ceph.conf file, right?
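(For reference, debug levels are typically either set in ceph.conf or injected at runtime; a sketch with illustrative subsystems and values:)

    # ceph.conf, picked up at daemon (re)start:
    [osd]
        debug osd = 10
        debug ms = 1

    # or injected at runtime without a restart:
    ceph tell osd.* injectargs '--debug-osd 10 --debug-ms 1'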
[21:05] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) Quit (Remote host closed the connection)
[21:09] * RMar04 (~RMar04@host86-156-50-85.range86-156.btcentralplus.com) has joined #ceph
[21:13] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) has joined #ceph
[21:14] * kmARC (~kmARC@84-73-73-158.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:15] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[21:15] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Read error: Connection reset by peer)
[21:18] <Abdi> m0zes: You Rock man! that was it... a typo!!!!
[21:18] <Abdi> I've updated the caps and now I can upload images into the cluster/pool
[21:25] <TheSov> is there a ceph roadmap out there?
[21:25] <TheSov> looked at the web page marked roadmap and it looks more like a bug fix page
[21:33] * Hemanth (~Hemanth@117.213.181.242) Quit (Ping timeout: 480 seconds)
[21:34] * shylesh (~shylesh@123.136.223.77) Quit (Remote host closed the connection)
[21:35] <m0zes> Abdi: file a bug. I cannot think of a reason that those caps should parse. I would hope it would do some sanity checking on the caps it is trying to set for a particular client.
[21:36] * dkrause (~dkrause@2607:fad0:32:a03:7db9:eed1:4a10:2f85) has joined #ceph
[21:38] <m0zes> if nothing else, a note about it in the bug tracker and/or documentation could be useful.
[21:43] * dopesong (~dopesong@lb1.mailer.data.lt) Quit (Remote host closed the connection)
[21:43] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:52] * kmARC (~kmARC@84-73-73-158.dclient.hispeed.ch) has joined #ceph
[21:54] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:01] * georgem (~Adium@206.108.127.16) has joined #ceph
[22:02] <Abdi> m0zes: Sounds good. I'll file a bug and put comments about my accidental misfortune
[22:03] <Abdi> m0zes: I do agree that the pool namespace should have been parsed without any whitespace
[22:04] * dkrause (~dkrause@2607:fad0:32:a03:7db9:eed1:4a10:2f85) Quit (Quit: Leaving)
[22:04] <m0zes> I think it should have errored on that, simply because there is no cap type "glance-images"; the space shouldn't just silently get eaten
[22:06] <Abdi> Yes. correct
[22:07] * mgolub (~Mikolaj@91.225.201.153) Quit (Quit: away)
[22:09] * adrian15b (~kvirc@btactic.ddns.jazztel.es) Quit (Ping timeout: 480 seconds)
[22:12] <doppelgrau> out of curiosity, does anyone know why the osds keep the last 500 versions of the osdmap in memory?
[22:17] * RMar04 (~RMar04@host86-156-50-85.range86-156.btcentralplus.com) Quit (Quit: Leaving.)
[22:20] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:26] * Abdi (~root@ec2-52-3-113-159.compute-1.amazonaws.com) has left #ceph
[22:27] * linjan (~linjan@176.195.217.90) has joined #ceph
[22:28] * georgem (~Adium@206.108.127.16) has joined #ceph
[22:32] * snakamoto (~Adium@192.16.26.2) has joined #ceph
[22:32] * georgem (~Adium@206.108.127.16) has left #ceph
[22:39] * snakamoto1 (~Adium@192.16.26.2) Quit (Ping timeout: 480 seconds)
[22:47] <TheSov> doppelgrau, I don't know, but the CERN guy was complaining about that
[22:47] <TheSov> they changed it to 100 or something
[22:48] <doppelgrau> TheSov: 20
[22:48] <TheSov> 20 wow
[22:48] <TheSov> big difference
[22:48] <doppelgrau> TheSov: and I was thinking "what are the advantages of so many generations if smaller values work too"
[22:49] <TheSov> maybe for purposes of locating data if the monitors are latent?
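(The 500 figure being discussed appears to correspond to the osd map cache size option, whose default was 500 in this era; a sketch of lowering it to the value doppelgrau mentions:)

    [osd]
        osd map cache size = 20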
[22:51] * Miouge (~Miouge@91.178.33.59) Quit (Quit: Miouge)
[22:53] <cholcombe> anyone have a link to the ceph bobtail end of life?
[22:54] * ganders (~root@190.2.42.21) Quit (Quit: WeeChat 0.4.2)
[22:56] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[22:57] * tupper (~tcole@173.38.117.84) Quit (Ping timeout: 480 seconds)
[23:03] * joshd (~jdurgin@206.169.83.146) Quit (Ping timeout: 480 seconds)
[23:08] * rlrevell1 (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[23:18] * derjohn_mob (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[23:18] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:19] * joshd (~jdurgin@206.169.83.146) has joined #ceph
[23:20] * ilken (ilk@2602:63:c2a2:af00:7d11:e660:be90:dc9b) Quit (Quit: Keyboard Error)
[23:22] * Ceph-Log-Bot (~logstash@185.66.248.215) has joined #ceph
[23:22] * Ceph-Log-Bot (~logstash@185.66.248.215) Quit (Remote host closed the connection)
[23:23] <joshd> cholcombe: bobtail is so old it's not even on this page: http://ceph.com/docs/master/releases/ dumpling is also eol, as shown there
[23:24] <cholcombe> joshd: oh i know it's ancient :)
[23:25] <cholcombe> joshd: thanks. That's what I needed
[23:26] * ilken (ilk@2602:63:c2a2:af00:7d11:e660:be90:dc9b) has joined #ceph
[23:26] * sleinen1 (~Adium@2001:620:0:69::101) Quit (Ping timeout: 480 seconds)
[23:27] <TheSov> interesting https://www.linkedin.com/grp/post/6577799-5984198086061215744
[23:27] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) has joined #ceph
[23:27] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) Quit ()
[23:27] <TheSov> ceph for windows
[23:27] <TheSov> that would be very nice
[23:36] * xarses_ (~xarses@12.164.168.117) Quit (Ping timeout: 480 seconds)
[23:42] * nsoffer (~nsoffer@bzq-109-65-255-114.red.bezeqint.net) has joined #ceph
[23:46] <TheSov> doppelgrau, there has to be a reason to even cache them at all
[23:46] * i_m (~ivan.miro@83.149.35.166) Quit (Read error: Connection reset by peer)
[23:46] * i_m (~ivan.miro@83.149.37.176) has joined #ceph
[23:46] <TheSov> doppelgrau, find that reason and I imagine, the reason for having 500 will be clear
[23:48] * danieagle (~Daniel@187.101.45.207) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[23:51] <doppelgrau> TheSov: sure, but I had hoped someone knew the reason; curiosity alone isn't enough ATM to invest a few hours of work
[23:52] * linjan (~linjan@176.195.217.90) Quit (Ping timeout: 480 seconds)
[23:52] <TheSov> i understand the reason to have 1 osd per disk, but i don't understand why each osd keeps its own copy of the map cache; there should be another layer between the system and the osd
[23:52] <TheSov> if the disk fails, the osd segfaults no harm done to other osds
[23:53] <TheSov> but there's no reason to keep 500 per osd
[23:53] <TheSov> you can have 1 map daemon to do that
[23:54] <TheSov> mmmmmm nested ceph clusters :)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.