#ceph IRC Log

IRC Log for 2016-02-16

Timestamps are in GMT/BST.

[10:54] -oxygen.oftc.net- *** Looking up your hostname...
[10:54] -oxygen.oftc.net- *** Checking Ident
[10:54] -oxygen.oftc.net- *** Found your hostname
[10:54] -oxygen.oftc.net- *** No Ident response
[10:54] * CephLogBot (~PircBot@rockbox.widodh.nl) has joined #ceph
[10:54] * Topic is 'http://ceph.com/get || dev channel #ceph-devel || test lab channel #sepia'
[10:54] * Set by scuttlemonkey!~scuttle@nat-pool-rdu-t.redhat.com on Mon Sep 07 00:44:10 CEST 2015
[10:54] * Sloo (~Sloo@194.249.247.164) has joined #ceph
[11:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[11:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[11:01] * dvanders (~dvanders@dvanders-pro.cern.ch) Quit (Remote host closed the connection)
[11:02] * rakeshgm (~rakesh@121.244.87.117) Quit (Ping timeout: 480 seconds)
[11:02] * sleinen (~Adium@86.12.129.15) has joined #ceph
[11:03] <sep> good morning;; ceph -s shows me IO and ops/sec for that instant. are there counters that shows totals somewhere as well ?
[11:04] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Read error: Connection reset by peer)
[11:04] * LeaChim (~LeaChim@host86-175-32-149.range86-175.btcentralplus.com) has joined #ceph
[11:05] * erwan_ (~erwan@46.231.131.179) has joined #ceph
[11:05] * rakeshgm (~rakesh@121.244.87.124) has joined #ceph
[11:05] * DLange (~DLange@dlange.user.oftc.net) Quit (Remote host closed the connection)
[11:05] <s3an2> sep, Maybe you want to look calamari
[11:06] <s3an2> It will graph the I/O over time for you - including per pool
[11:06] * sleinen1 (~Adium@2001:620:0:82::101) has joined #ceph
[11:06] * DLange (~DLange@dlange.user.oftc.net) has joined #ceph
[11:07] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) Quit (Quit: ZNC - http://znc.in)
[11:08] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) has joined #ceph
[11:10] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) Quit ()
[11:10] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) has joined #ceph
[11:10] * sleinen (~Adium@86.12.129.15) Quit (Ping timeout: 480 seconds)
[11:11] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) Quit ()
[11:11] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[11:11] * HappyLoaf (~HappyLoaf@cpc93928-bolt16-2-0-cust133.10-3.cable.virginm.net) has joined #ceph
[11:12] * MannerMan (~oscar@user170.217-10-117.netatonce.net) Quit (Ping timeout: 480 seconds)
[11:12] <sep> s3an2, no debian packages yet. but i will absolutely look at it. but i would also like to graph those values in our overall monitoring system,
[11:12] * TMM (~hp@185.5.122.2) Quit (Quit: Ex-Chat)
[11:13] * rakesh_ (~rakesh@121.244.87.124) has joined #ceph
[11:14] * quinoa (~quinoa@24-148-81-106.c3-0.grn-ubr2.chi-grn.il.cable.rcn.com) has joined #ceph
[11:14] * kefu is now known as kefu|afk
[11:18] * Mika_c (~quassel@122.146.93.152) Quit (Remote host closed the connection)
[11:20] * dux0r (~Crisco@76GAACFQX.tor-irc.dnsbl.oftc.net) Quit ()
[11:20] * Maza (~Szernex@tor-exit-2.netdive.xyz) has joined #ceph
[11:24] * MannerMan (~oscar@user170.217-10-117.netatonce.net) has joined #ceph
[11:27] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[11:30] * shylesh (~shylesh@121.244.87.118) has joined #ceph
[11:34] * Geph (~Geoffrey@41.77.153.99) Quit (Ping timeout: 480 seconds)
[11:35] * Geph (~Geoffrey@41.77.153.99) has joined #ceph
[11:36] * sleinen1 (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[11:37] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[11:38] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) Quit (Quit: leaving)
[11:41] * dosaboy (~dosaboy@65.93.189.91.lcy-01.canonistack.canonical.com) has joined #ceph
[11:42] * rotbeard (~redbeard@ppp-115-87-78-63.revip4.asianet.co.th) has joined #ceph
[11:43] * rakesh__ (~rakesh@121.244.87.117) has joined #ceph
[11:50] * hr (~hr@103.50.11.146) Quit (Ping timeout: 480 seconds)
[11:50] * rakesh_ (~rakesh@121.244.87.124) Quit (Ping timeout: 480 seconds)
[11:50] * rakeshgm (~rakesh@121.244.87.124) Quit (Ping timeout: 480 seconds)
[11:50] * Maza (~Szernex@7V7AACNT4.tor-irc.dnsbl.oftc.net) Quit ()
[11:50] * Ralth (~superdug@tor2e1.privacyfoundation.ch) has joined #ceph
[11:50] * chef_ (~oftc-webi@p790356f6.tokynt01.ap.so-net.ne.jp) has joined #ceph
[11:51] * peeejayz (~peeejayz@isis57193.sci.rl.ac.uk) Quit ()
[11:55] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[11:59] * naoto (~naotok@2401:bd00:b001:8920:27:131:11:254) Quit (Quit: Leaving...)
[12:01] * Concubidated (~Adium@c-50-173-245-118.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[12:01] * Concubidated (~Adium@c-50-173-245-118.hsd1.ca.comcast.net) has joined #ceph
[12:05] * wjw-freebsd (~wjw@ip-80-113-15-250.ip.prioritytelecom.net) has joined #ceph
[12:12] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[12:12] * lmb (~lmb@charybdis-ext.suse.de) Quit (Ping timeout: 480 seconds)
[12:12] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[12:12] * coreping (~Michael_G@n1.coreping.org) has left #ceph
[12:13] * erwan_ (~erwan@46.231.131.179) Quit (Ping timeout: 480 seconds)
[12:20] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Ping timeout: 480 seconds)
[12:20] * Ralth (~superdug@84ZAACKH6.tor-irc.dnsbl.oftc.net) Quit ()
[12:20] * Jones (~Malcovent@193.90.12.89) has joined #ceph
[12:23] * wjw-freebsd (~wjw@ip-80-113-15-250.ip.prioritytelecom.net) Quit (Ping timeout: 480 seconds)
[12:24] * mhuang (~mhuang@222.73.197.214) Quit (Quit: This computer has gone to sleep)
[12:24] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Ping timeout: 480 seconds)
[12:27] <boichev> sep are you sure the op/s and IOs are really just for the local osds and not for the whole cluster ?
[12:27] * TMM (~hp@185.5.122.2) has joined #ceph
[12:28] <sep> boichev, they are for the whole cluster. but they are for the instant you look at
[12:28] <sep> i want the total number of ops. and total MB written or read. so i can graph them.
[12:28] <boichev> sep aa you want a graph over time :)
[12:29] <sep> exactly
[12:29] <boichev> sep I use zabbix for that to get all information regarding the cluster + alerts
[12:29] * The_Ball (~pi@75.80-203-114.nextgentel.com) has joined #ceph
[12:29] <boichev> sep https://github.com/thelan/ceph-zabbix
[12:31] * shylesh (~shylesh@121.244.87.118) Quit (Remote host closed the connection)
[12:31] * kanagaraj_ (~kanagaraj@121.244.87.117) has joined #ceph
[12:33] * kanagaraj (~kanagaraj@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:34] * sleinen1 (~Adium@2001:620:0:82::100) has joined #ceph
[12:34] * i_m (~ivan.miro@195.230.71.20) has joined #ceph
[12:34] <sep> boichev, ugly as heck... " pginfo | sed -n '/pgmap/s/.* \([0-9]* .\?\)B\/s rd.*/\1/p' | sed -e "s/K/*1000/ig;s/M/*1000*1000/i;s/G/*1000*1000*100" but exactly what i was looking for . thanks :)
[12:34] <boichev> sep :)
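(For reference, a less fragile way than sed-scraping "ceph -s" to get numbers for graphing is to ask for machine-readable output, or to read the cumulative counters from an OSD admin socket. A sketch only; the JSON field names below are assumptions that vary between Ceph releases, so check them with a plain "jq ." first.)

    # instantaneous cluster-wide rates as JSON instead of the human-readable pgmap line
    ceph status --format json | jq '.pgmap'
    # cumulative per-OSD totals (ops and bytes since the daemon started), run on the OSD host
    ceph daemon osd.0 perf dump | jq '.osd | {op_r, op_w, op_in_bytes, op_out_bytes}'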
[12:35] * alex_ (~alex@195.88.72.204) has joined #ceph
[12:37] * alex_ (~alex@195.88.72.204) Quit ()
[12:38] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[12:40] * sleinen (~Adium@2001:620:0:82::108) has joined #ceph
[12:41] * porunov (~alex@195.88.72.204) has joined #ceph
[12:45] * b0e (~aledermue@213.95.25.82) has joined #ceph
[12:45] * wer (~wer@216.197.66.226) Quit (Ping timeout: 480 seconds)
[12:46] * sleinen1 (~Adium@2001:620:0:82::100) Quit (Read error: Connection reset by peer)
[12:49] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[12:50] * Jones (~Malcovent@76GAACFTQ.tor-irc.dnsbl.oftc.net) Quit ()
[12:50] * Moriarty (~Phase@torproxy02.31173.se) has joined #ceph
[12:52] * Concubidated (~Adium@c-50-173-245-118.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[12:53] <porunov> Hello! I have a problem starting radosgw on CentOS 7. The documentation says: "On CentOS/RHEL systems, use ceph-radosgw. For example: sudo /etc/init.d/ceph-radosgw start" but the problem is that I have neither ceph-radosgw nor any other ceph scripts in the /etc/init.d/ directory. How do I run ceph-radosgw on CentOS 7? Sincerely
[12:57] * wjw-freebsd (~wjw@ip-80-113-15-250.ip.prioritytelecom.net) has joined #ceph
[12:58] <IcePic> cent7 would have systemctl unit files instead.
[12:58] <IcePic> "systemctl list-unit-files" and look for it there.
[12:59] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[13:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[13:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[13:04] <boichev> Why is there a difference between "ceph pg map 6.6c" and "ceph pg 6.6c query" ?
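(For anyone reading later: "ceph pg map <pgid>" only reports what the monitors compute from the current OSD map, i.e. the up and acting sets, while "ceph pg <pgid> query" contacts the primary OSD and returns its detailed internal state (peering, recovery, per-replica info), so the two can legitimately disagree while the cluster is still converging. Using the pgid from the question:)

    ceph pg map 6.6c       # osdmap epoch plus up/acting OSD sets, answered by the monitors
    ceph pg 6.6c query     # full peering/recovery state, answered by the pg's primary OSD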
[13:06] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[13:07] <porunov> IcePic, thank you for helping. But I have tried to run "systemctl start ceph-radosgw" and it says: No such file or directory. With "systemctl list-unit-files" I have found "ceph-radosgw@.service", but I cannot run "systemctl start ceph-radosgw@.service"; it says: Unit name ceph-radosgw@.service is not valid. Help please
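(The trailing "@" means ceph-radosgw@.service is a systemd template unit: it has to be started with an instance name after the "@". For radosgw the instance is normally the rgw section name from ceph.conf without the leading "client." prefix. A sketch, assuming a [client.rgw.gateway] section; substitute your own name:)

    systemctl list-unit-files 'ceph-radosgw*'     # shows the template, ceph-radosgw@.service
    systemctl enable ceph-radosgw@rgw.gateway
    systemctl start ceph-radosgw@rgw.gateway
    systemctl status ceph-radosgw@rgw.gateway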
[13:10] * quinoa (~quinoa@24-148-81-106.c3-0.grn-ubr2.chi-grn.il.cable.rcn.com) Quit (Remote host closed the connection)
[13:14] <alfredodeza> jamespage: I don't think we've done a 10.0.3
[13:14] <jamespage> alfredodeza, I saw it tagged in the git repo..
[13:15] <alfredodeza> what
[13:15] * alfredodeza looks
[13:15] <jamespage> alfredodeza, https://github.com/ceph/ceph/releases/tag/v10.0.3
[13:15] <alfredodeza> gees
[13:15] <alfredodeza> z
[13:15] <jamespage> hehe
[13:15] <alfredodeza> what did that
[13:15] <alfredodeza> jamespage: this is an issue because we have not done any releases
[13:15] <alfredodeza> jamespage: I will get back to you on this
[13:17] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[13:17] <jamespage> alfredodeza, thanks
[13:17] <alfredodeza> jamespage: bottom line is that we haven't cut a new release
[13:17] <alfredodeza> so there shouldn't be any tarballs :)
[13:17] <jamespage> alfredodeza, gotcha
[13:18] <jamespage> alfredodeza, are you a good person to ask about pybind link failures? struggling to get 10.0.2 packages for Ubuntu to build right now
[13:18] <jamespage> https://launchpadlibrarian.net/238532624/buildlog_ubuntu-xenial-amd64.ceph_10.0.2-1~ubuntu16.04.1~ppa201602142150_BUILDING.txt.gz
[13:20] * jluis (~joao@charybdis-ext.suse.de) has joined #ceph
[13:20] * ChanServ sets mode +o jluis
[13:20] * Moriarty (~Phase@4MJAACH7D.tor-irc.dnsbl.oftc.net) Quit ()
[13:20] * aldiyen (~PcJamesy@tor.metaether.net) has joined #ceph
[13:23] * lmb (~lmb@charybdis-ext.suse.de) has joined #ceph
[13:24] * quinoa (~quinoa@24-148-81-106.c3-0.grn-ubr2.chi-grn.il.cable.rcn.com) has joined #ceph
[13:25] <alfredodeza> jamespage: let me take a look
[13:25] <alfredodeza> you never know who might have gone through a similar failure :)
[13:26] <alfredodeza> jamespage: pybind recently moved to use PyRex; are you aware of the dependencies this brought in?
[13:26] * alfredodeza thinks this is called pyrex
[13:27] <alfredodeza> Cython
[13:27] <alfredodeza> not pyrex
[13:28] <alfredodeza> jamespage: https://github.com/ceph/ceph/blob/master/debian/control#L16
[13:28] * overclk (~vshankar@121.244.87.117) Quit (Quit: Zzzzzzz...)
[13:28] <alfredodeza> hrmn the output says it is getting installed
[13:28] * sleinen (~Adium@2001:620:0:82::108) Quit (Ping timeout: 480 seconds)
[13:29] * wjw-freebsd (~wjw@ip-80-113-15-250.ip.prioritytelecom.net) Quit (Ping timeout: 480 seconds)
[13:40] * chef_ (~oftc-webi@p790356f6.tokynt01.ap.so-net.ne.jp) Quit (Ping timeout: 480 seconds)
[13:41] * erwan_ (~erwan@46.231.131.178) has joined #ceph
[13:41] <boolman> Do we have an ETA on the Jewel-stable ?
[13:45] * wjw-freebsd (~wjw@vpn.ecoracks.nl) has joined #ceph
[13:45] * erwan_ (~erwan@46.231.131.178) Quit ()
[13:50] * erwan_taf (~erwan@46.231.131.178) has joined #ceph
[13:50] * aldiyen (~PcJamesy@7V7AACNWC.tor-irc.dnsbl.oftc.net) Quit ()
[13:50] * Crisco (~Catsceo@93.115.95.204) has joined #ceph
[13:53] * bene_in_mtg (~bene@2601:18c:8501:25e4:ea2a:eaff:fe08:3c7a) has joined #ceph
[13:58] * wyang (~wyang@116.216.30.5) has joined #ceph
[13:59] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[13:59] * Geph (~Geoffrey@41.77.153.99) Quit (Ping timeout: 480 seconds)
[13:59] * quinoa (~quinoa@24-148-81-106.c3-0.grn-ubr2.chi-grn.il.cable.rcn.com) Quit (Remote host closed the connection)
[14:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[14:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[14:02] * Geph (~Geoffrey@41.77.153.99) has joined #ceph
[14:03] * ira (~ira@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:04] * sleinen (~Adium@86.12.129.15) has joined #ceph
[14:06] * sleinen1 (~Adium@2001:620:0:82::105) has joined #ceph
[14:08] <sleinen1> Today (at the OpenStack ops mid-cycle's "Ceph integration" session) I heard about the "hashpspool" flag.
[14:08] <sleinen1> Is there a way I can find out whether a specific pool in my Ceph cluster has this set?
[14:10] * shaunm (~shaunm@74.83.215.100) has joined #ceph
[14:11] * neurodrone_ (~neurodron@pool-100-35-67-57.nwrknj.fios.verizon.net) has joined #ceph
[14:11] * rdas (~rdas@121.244.87.116) has joined #ceph
[14:12] * sleinen (~Adium@86.12.129.15) Quit (Ping timeout: 480 seconds)
[14:13] <Be-El> sleinen1: 'ceph osd pool ls detail'
[14:13] * mhuang (~mhuang@116.237.136.156) has joined #ceph
[14:20] * Crisco (~Catsceo@84ZAACKLM.tor-irc.dnsbl.oftc.net) Quit ()
[14:20] * xolotl (~Frymaster@anonymous6.sec.nl) has joined #ceph
[14:21] * rdas (~rdas@121.244.87.116) Quit (Ping timeout: 480 seconds)
[14:27] <sleinen1> Be-El: Thanks! So this is actually set on all our pools it seems. Good! (I guess :-)
[14:27] * branto (~borix@ip-78-102-208-28.net.upcbroadband.cz) has joined #ceph
[14:28] <Be-El> sleinen1: i'm not sure what hashpspool is about, but it's also set for all our pools
[14:32] * swami1 (~swami@49.32.0.125) Quit (Quit: Leaving.)
[14:34] * sleinen2 (~Adium@2001:620:0:82::109) has joined #ceph
[14:35] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[14:35] * kefu|afk (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[14:35] * wyang (~wyang@116.216.30.5) Quit (Ping timeout: 480 seconds)
[14:36] * kefu (~kefu@114.92.107.250) has joined #ceph
[14:38] * sleinen1 (~Adium@2001:620:0:82::105) Quit (Ping timeout: 480 seconds)
[14:41] <sleinen2> Be-El: hashpspool was recommended by operators in the room (https://etherpad.openstack.org/p/MAN-ops-Ceph) to improve balancing across OSDs. Pools created under Firefly or later can be assumed to have it.
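(A concrete way to check the flag per pool, sketched here; jq and the pool fields are shown as an illustration:)

    ceph osd pool ls detail                 # each pool line lists "flags hashpspool" when it is set
    ceph osd dump --format json | jq '.pools[] | {pool_name, flags_names}'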
[14:42] * EinstCrazy (~EinstCraz@58.39.61.189) has joined #ceph
[14:42] * hr (~hr@103.50.11.146) has joined #ceph
[14:50] * dyasny (~dyasny@cable-192.222.131.135.electronicbox.net) has joined #ceph
[14:50] * xolotl (~Frymaster@4MJAACIBU.tor-irc.dnsbl.oftc.net) Quit ()
[14:50] * elt (~Jourei@76GAACFYK.tor-irc.dnsbl.oftc.net) has joined #ceph
[14:50] * mhack (~mhack@66-168-117-78.dhcp.oxfr.ma.charter.com) has joined #ceph
[14:51] * hr (~hr@103.50.11.146) Quit (Ping timeout: 480 seconds)
[14:52] <sep> hummm. i have that too. but since one osd is 50% full and another 75% full, i wish it was better :)
[15:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[15:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[15:02] * georgem (~Adium@206.108.127.16) has joined #ceph
[15:03] <sleinen2> @sep, I had a similar thought... because our OSD's are irregularly utilized, I was hoping to be missing something trivial like hashpspool :-)
[15:09] <sep> sleinen2, with a single pool 4096pg and a single rbd image i had hope it would balance better too. if you learn anything let me know :)
[15:12] * neurodrone_ (~neurodron@pool-100-35-67-57.nwrknj.fios.verizon.net) Quit (Quit: neurodrone_)
[15:12] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:14] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[15:16] * GeoTracer (~Geoffrey@41.77.153.99) has joined #ceph
[15:16] * neurodrone_ (~neurodron@pool-100-35-67-57.nwrknj.fios.verizon.net) has joined #ceph
[15:17] * neurodrone_ (~neurodron@pool-100-35-67-57.nwrknj.fios.verizon.net) Quit ()
[15:17] * sleinen1 (~Adium@2001:620:0:82::102) has joined #ceph
[15:18] * Geph (~Geoffrey@41.77.153.99) Quit (Ping timeout: 480 seconds)
[15:19] * ramonskie (~oftc-webi@D97AE1BA.cm-3-3d.dynamic.ziggo.nl) has joined #ceph
[15:20] * elt (~Jourei@76GAACFYK.tor-irc.dnsbl.oftc.net) Quit ()
[15:20] * CorneliousJD|AtWork (~tokie@torland1-this.is.a.tor.exit.server.torland.is) has joined #ceph
[15:21] * rakesh__ (~rakesh@121.244.87.117) Quit (Quit: Leaving)
[15:21] * shr0p (~shr0p@24-181-197-147.dhcp.hckr.nc.charter.com) has joined #ceph
[15:21] * shr0p_ (~shr0p@24-181-197-147.dhcp.hckr.nc.charter.com) has joined #ceph
[15:21] <ramonskie> when i try to download an object or view a bucket from radosgw configured with civetweb i get "invalidbucket"; when i use apache in front of it i don't get this issue. is this a known issue?
[15:22] * sleinen2 (~Adium@2001:620:0:82::109) Quit (Ping timeout: 480 seconds)
[15:23] <ramonskie> by browser
[15:24] <ramonskie> when i use dragondisk/s3cmd all works as expected
[15:25] * EinstCra_ (~EinstCraz@58.39.61.189) has joined #ceph
[15:28] * kanagaraj_ (~kanagaraj@121.244.87.117) Quit (Quit: Leaving)
[15:31] * EinstCrazy (~EinstCraz@58.39.61.189) Quit (Ping timeout: 480 seconds)
[15:33] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[15:34] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[15:34] * ereb0s (~ereb0s@107-217-135-120.lightspeed.lsvlky.sbcglobal.net) has joined #ceph
[15:36] * mhuang (~mhuang@116.237.136.156) Quit (Quit: This computer has gone to sleep)
[15:40] * quinoa (~quinoa@38.102.49.8) has joined #ceph
[15:44] * swami1 (~swami@27.7.168.228) has joined #ceph
[15:45] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[15:47] * dariol (~dlah@185.15.31.210) Quit (Quit: Ex-Chat)
[15:47] * jowilkin (~jowilkin@2601:644:4000:b0bf:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[15:48] * nooxqe (~noooxqe@host-78-145-43-224.as13285.net) has joined #ceph
[15:50] * sleinen (~Adium@86.12.129.15) has joined #ceph
[15:50] * CorneliousJD|AtWork (~tokie@4MJAACIFH.tor-irc.dnsbl.oftc.net) Quit ()
[15:54] * vbellur (~vijay@c-24-62-102-75.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[15:55] * shr0p (~shr0p@24-181-197-147.dhcp.hckr.nc.charter.com) Quit (Remote host closed the connection)
[15:55] * shr0p_ (~shr0p@24-181-197-147.dhcp.hckr.nc.charter.com) Quit (Remote host closed the connection)
[15:55] * m8x (~user@182.150.27.112) Quit (Ping timeout: 480 seconds)
[15:56] * jowilkin (~jowilkin@2601:644:4000:b0bf:ea2a:eaff:fe08:3f1d) has joined #ceph
[15:57] * sleinen1 (~Adium@2001:620:0:82::102) Quit (Ping timeout: 480 seconds)
[15:58] * sleinen (~Adium@86.12.129.15) Quit (Ping timeout: 480 seconds)
[15:58] * sleinen (~Adium@2001:620:0:82::103) has joined #ceph
[15:59] * mhuang (~mhuang@116.237.136.156) has joined #ceph
[15:59] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[16:00] * yanzheng (~zhyan@182.149.65.69) Quit (Quit: This computer has gone to sleep)
[16:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[16:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[16:04] * EinstCra_ (~EinstCraz@58.39.61.189) Quit (Remote host closed the connection)
[16:04] * EinstCrazy (~EinstCraz@58.39.61.189) has joined #ceph
[16:05] * lmb (~lmb@charybdis-ext.suse.de) Quit (Ping timeout: 480 seconds)
[16:07] * GeoTracer is now known as Geph
[16:08] <Geph> Hi, is it a bad idea to place the OS (Ubuntu) on the OSD journal SSD?
[16:08] * lmb (~lmb@tmo-099-22.customers.d1-online.com) has joined #ceph
[16:09] <Geph> Want to partition the SSD five ways, one for OS and the other four as the journals for my 4 OSDs.
[16:09] * EinstCrazy (~EinstCraz@58.39.61.189) Quit (Read error: Connection reset by peer)
[16:10] <Heebie> Geph: It's not ideal, but if your traffic is low enough to allow it, it works. It would probably be pretty wasteful to dedicate a whole SSD to a small journal partition. (just my opinion)
[16:10] * sleinen (~Adium@2001:620:0:82::103) Quit (Ping timeout: 480 seconds)
[16:10] * arthurh (~arthurh@38.101.34.1) has joined #ceph
[16:11] * ramonskie (~oftc-webi@D97AE1BA.cm-3-3d.dynamic.ziggo.nl) Quit (Quit: Page closed)
[16:11] * swami1 (~swami@27.7.168.228) Quit (Quit: Leaving.)
[16:12] * porunov (~alex@195.88.72.204) Quit (Quit: Konversation terminated!)
[16:13] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) Quit (Quit: Leaving.)
[16:13] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[16:14] <Geph> Heebie, thanks was thinking that too, and it'll save a sata port combining the journal SSD and OS onto one device
[16:14] <The1_> Geph: I've done so too
[16:14] * EinstCrazy (~EinstCraz@58.39.61.189) has joined #ceph
[16:14] * arthurh (~arthurh@38.101.34.1) Quit ()
[16:15] <Heebie> Hopefully it's a 6Gbps sata port!
[16:15] <nooxqe> Hi, I'm currently reading through the Ceph docs and it recommends upgrading the Kernel on Debian and recommends 4.1.4. Is this information still valid now or is there another version people recommend?
[16:15] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:15] <Heebie> nooxqe: It seems the later kernel version you can manage the better.
[16:15] <Geph> yes, yes
[16:15] * dugravot6 (~dugravot6@dn-infra-04.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[16:16] * lmb (~lmb@tmo-099-22.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[16:16] <Geph> Heebie, how large do you think the journal partition needs to be?
[16:17] * quinoa (~quinoa@38.102.49.8) Quit (Ping timeout: 480 seconds)
[16:17] <Heebie> Too large a partition will cause the system to respond REALLY fast until the journal is full, then slow to a crawl while it empties. I've been using 5GB
[16:17] <Geph> connectivity wise planning to run 4 x 1G nics lacp
[16:17] <Geph> can't afford 10ge
[16:18] * rotbeard (~redbeard@ppp-115-87-78-63.revip4.asianet.co.th) Quit (Quit: Leaving)
[16:18] <Heebie> LACP directly from the server will add latency, but will increase bandwidth. The default hash will probably be just destination MAC address or just source MAC; you might want to make sure you're using a hash that accounts for both source & destination.
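(On Linux bonding this is the xmit_hash_policy: the default, layer2, hashes only MAC addresses, so a single client/OSD pair tends to stick to one 1G link, while layer3+4 also mixes in IPs and ports and spreads flows better. A minimal ifupdown sketch with placeholder interface names and addresses; the switch side must be configured for LACP as well:)

    auto bond0
    iface bond0 inet static
        address 192.0.2.10
        netmask 255.255.255.0
        bond-slaves eth0 eth1 eth2 eth3
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4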
[16:19] <Geph> ok so splitting a 400GB SSD 5 ways (4 journals and one partition for the OS should be ample)
[16:19] * lmb (~lmb@nat.nue.novell.com) has joined #ceph
[16:20] <Heebie> My testing I've been using OS and 5x5GB partitions on the SSD's for journals on 5 3TB OSD's.
[16:21] <The1_> unless you have really really slow spinning rust between 5 or 10GB
[16:21] <Geph> ok, my hardware will support 5 OSD's with one SSD (6 sata ports)
[16:21] <The1_> I went with 10GB just "because"
[16:22] <Heebie> I read some guidelines that told me "around 4GB" so I used 5, since I had 200GB SSD's, and wanted to use at least a little bit of that.
[16:22] <The1_> ceph-deploy uses 5GB
[16:22] <Geph> thanks, 7200rpm seagate enterprise NAS
[16:22] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) has joined #ceph
[16:22] <The1_> at least in the versions I used a bit once
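(The sizing guideline in the Ceph docs of this era ties the journal to how much data can land between filestore syncs: osd journal size = 2 * (expected throughput * filestore max sync interval). A rough worked example, assuming a disk that sustains about 100 MB/s and the default 5 s sync interval:)

    # 2 * 100 MB/s * 5 s = 1000 MB, so ~1 GB is the floor; 5-10 GB leaves headroom for bursts,
    # which matches the 5 GB ceph-deploy default mentioned above.
    # To set it explicitly in ceph.conf (value is in MB):
    [osd]
    osd journal size = 10240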
[16:23] <The1_> brrrrrrrrr
[16:23] <The1_> seagate
[16:23] <The1_> and..
[16:23] <The1_> no need for enterprise stuff
[16:23] <The1_> use cheapest possible
[16:23] <Geph> really
[16:23] <The1_> you need to understand that stuff WILL break
[16:23] <Heebie> ceph-deploy used whatever partitions I gave it. I didn't see any options for changing the size of a partition etc.
[16:24] <The1_> no need to splash out on enterprise-grade rust when that is something you will be losing at some point in time
[16:24] <The1_> only the SSDs are important in that regard
[16:24] <Geph> but not the seagate SMR drives i presume
[16:24] <Heebie> The1_: There is a good reason not to use desktop drives. Their performance is SERIOUSLY crap. I had a SINGLE desktop drive slow my whole test cluster down by about 50% because its performance was so poor.
[16:24] <Heebie> (and that is with a 5GB SSD journal in front of it.)
[16:24] <Geph> since they are the cheapest per gig
[16:24] <The1_> SMR.. eeeeks
[16:25] * ghostnote (~ggg@kbtr2ce.tor-relay.me) has joined #ceph
[16:25] <IcePic> depends on what the expected usecase is. (SMR that is)
[16:25] <The1_> Heebie: ack.. :/
[16:25] <Heebie> Bigtime ack!
[16:26] <The1_> I believe we're using samsung or something
[16:26] <The1_> MD-something models
[16:26] <Heebie> I'm now wondering if performance would vary from some kind of "prepped" partition for OSD vs. simply a symbolic link to the SSD partition device node it puts in the way I deployed them.
[16:26] <Geph> if i'm building something for just UHD video storage for my company 100 plus TB then do I even need SSD for journals?
[16:26] <The1_> yes
[16:26] <IcePic> the journals will even out writes to the spinning disks
[16:26] <The1_> you always need the journals
[16:27] * mhuang (~mhuang@116.237.136.156) Quit (Quit: This computer has gone to sleep)
[16:27] <IcePic> and send acks back faster for the writer.
[16:27] <The1_> and you always need to spend some cash on them - use enterprise grade ones
[16:27] * EinstCrazy (~EinstCraz@58.39.61.189) Quit (Read error: Connection reset by peer)
[16:27] <Geph> yea the Samsung MZ-7WD400EW 845DC Pro
[16:27] <The1_> Intel S36xx, S3700 or similar
[16:27] <The1_> and other DC editions
[16:27] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit (Quit: Leaving)
[16:27] * sleinen1 (~Adium@2001:620:0:82::104) has joined #ceph
[16:28] <IcePic> at some point, someone will want to vomit 100TB data into that shiny cluster, at that point it will be nice to handle the incoming load (since the replication and/or EC will amplify the incoming writes)
[16:28] <Geph> 10 DWPD and performance looks good
[16:28] <Heebie> If you don't add a specific journal, then writes journal to the OSD disk, then migrate from the journal to the rest of the disk... so having the SSD's makes a HUGE difference. I'm just wondering if having done something with `ceph-deploy disk` to prepare the SSD partitions before creating the OSD would make any difference.
[16:28] <The1_> samsung 840/850 PRO doesn't have a good track record (they die like flies..), but the DC ones seem promising - at least on paper.. :)
[16:28] <The1_> I went with Intel S3710 for my cluster some time ago
[16:29] <The1_> I didn't have the nerve to try out Samsung's DC editions
[16:29] <The1_> their track record for high-end consumer versions (i.e. 850 Pro et al) is sh*t
[16:29] <Geph> sorry, to be more clear: I mean to have each journal on each 6TB OSD
[16:29] <Heebie> The ones I'm using are Dell "enterprise" grade. Not sure who actually made them.
[16:29] * bene_in_mtg (~bene@2601:18c:8501:25e4:ea2a:eaff:fe08:3c7a) Quit (Quit: Konversation terminated!)
[16:29] <DanFoster> Usually intel for Dell servers, Heebie.
[16:29] <The1_> Heebie: write intensive ones are Intel S3700
[16:29] <DanFoster> Easy to find out with dmidecode or similar.
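(For rebadged "Dell enterprise" SSDs the underlying vendor and model are usually visible straight from the drive itself; a sketch, with /dev/sdX as a placeholder:)

    lsblk -o NAME,MODEL,SERIAL,SIZE
    smartctl -i /dev/sdX        # from smartmontools: vendor/model string, firmware, etc.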
[16:30] <Heebie> THe good Intel, or the mediocre ones? Off to a meeting.
[16:30] <The1_> but it differs over time
[16:30] <The1_> I asked my KAM for the exact same reason, but he said they wouldn't/couldn't deliver the exact same disks over time
[16:30] <Geph> ok so I could put more into the SSD if the OSD drives are cheaper
[16:31] <The1_> but I did get a datasheet for Intel S3700 for write intensive ones - and I can't remember the ones for "mixed use"
[16:31] <The1_> afk..
[16:31] <Geph> The1_, how cheap do you suggest?
[16:31] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:31] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[16:32] <Geph> thanks guys
[16:33] * arthurh (~arthurh@vpn.bigbytesystems.com) has joined #ceph
[16:35] * xarses (~xarses@64.124.158.100) has joined #ceph
[16:38] * linjan (~linjan@86.62.112.22) Quit (Ping timeout: 480 seconds)
[16:39] * EinstCrazy (~EinstCraz@58.39.61.189) has joined #ceph
[16:43] * bara (~bara@213.175.37.12) has joined #ceph
[16:47] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[16:48] * guillaume_ (~guillaume@199.91.185.156) has joined #ceph
[16:52] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[16:52] <The1_> Geph: I went with the cheapest 4TB drive with a resonable cache a local danish price-comparison site could present me with and that one of my regular suppliers had at the listed price
[16:52] <The1_> oh..
[16:52] <The1_> no.. wait
[16:53] <The1_> disk cache on the rotating rust means didly squat
[16:53] <The1_> always turn off disk cache on the rotating rust
[16:53] * tomprince (~tomprince@hermes.hocat.ca) has left #ceph
[16:53] <Geph> The1_: why only 4TB?
[16:54] <Geph> there is a "high" cost for all the supporting system components
[16:54] <The1_> and if you use disk cache on the SSDs, make sure they have "power loss prevention" or some sort of protection against losing their cache if power fails
[16:54] * neurodrone (~neurodron@158.106.193.162) Quit (Quit: neurodrone)
[16:55] <Geph> i guess you're not looking for lots of capacity
[16:55] * ghostnote (~ggg@7V7AACN08.tor-irc.dnsbl.oftc.net) Quit ()
[16:55] * sardonyx (~offer@84ZAACKSY.tor-irc.dnsbl.oftc.net) has joined #ceph
[16:55] * neurodrone (~neurodron@158.106.193.162) has joined #ceph
[16:55] <The1_> Geph: when I created my cluster 4TB were in the sweetspot in relation to capacity/price
[16:55] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) has joined #ceph
[16:55] <Geph> i see
[16:56] <The1_> and a 4TB drive is 1,5 times faster to rebuild than a 6TB one..
[16:56] <The1_> etc etc etc..
[16:56] <Geph> true, true
[16:56] <The1_> and if I lose an entire node, rebuilding multiple 4TB drives is still faster and requires less data to flow compared to larger drives
[16:57] <Geph> Heebie, said not desktop drives
[16:57] <The1_> but it was mostly a capacity/price calculation
[16:57] <Geph> what did you deploy?
[16:57] <The1_> desktop ones
[16:57] <The1_> I never saw that in my tests
[16:57] <Geph> you run VM loads?
[16:57] <The1_> no
[16:58] <The1_> I have multiple RBD images with XFS filesystems inside that are presented to some clients via NFS
[16:58] <Geph> ok, I'll look to add SSD cacheing tier in the future for VMs
[16:58] <The1_> each filesystem holds anywhere between a few hundred files and a few GBs up to 20+ million files and several TB of data
[17:00] <The1_> afk again
[17:00] <Geph> cool
[17:00] * TheSov2 (~TheSov@cip-248.trustwave.com) has joined #ceph
[17:00] * b0e (~aledermue@213.95.25.82) Quit (Quit: Leaving.)
[17:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[17:01] <Geph> Anybody deployed Ceph on BTRFS?
[17:01] <Geph> in production..
[17:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[17:02] <TheSov2> its only supported on mirrors at the moment
[17:02] <TheSov2> so i doubt it
[17:02] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:04] <Geph> TheSov2: so you can't deploy OSD's on btrfs?
[17:06] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[17:13] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[17:15] * EinstCrazy (~EinstCraz@58.39.61.189) Quit (Remote host closed the connection)
[17:17] * drankis (~drankis__@89.111.13.198) Quit (Ping timeout: 480 seconds)
[17:17] * Concubidated (~Adium@c-50-173-245-118.hsd1.ca.comcast.net) has joined #ceph
[17:20] <SamYaple> Geph: you can
[17:20] <SamYaple> and yes people (including myself) have done it
[17:21] <SamYaple> would recommend XFS or btrfs with the latest kernel (at least 4.2)
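(The filesystem an OSD is created with is just an option to ceph-disk and ceph.conf; a sketch of a btrfs-backed OSD with placeholder devices, where the mount options shown are common choices rather than requirements:)

    # optional ceph.conf [osd] settings:
    #   osd mkfs type = btrfs
    #   osd mount options btrfs = rw,noatime
    ceph-disk prepare --fs-type btrfs /dev/sdX /dev/sdY1    # data disk, journal partition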
[17:22] * sleinen2 (~Adium@2001:620:0:82::100) has joined #ceph
[17:23] * wjw-freebsd (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[17:24] * enax (~enax@hq.ezit.hu) Quit (Ping timeout: 480 seconds)
[17:24] * lmb (~lmb@nat.nue.novell.com) Quit (Ping timeout: 480 seconds)
[17:25] * sardonyx (~offer@84ZAACKSY.tor-irc.dnsbl.oftc.net) Quit ()
[17:25] * isaxi (~CoZmicShR@tor00.telenet.unc.edu) has joined #ceph
[17:25] * wushudoin (~wushudoin@38.140.108.2) has joined #ceph
[17:26] <Geph> SamYaple, thanks
[17:26] * debian112 (~bcolbert@24.126.201.64) has left #ceph
[17:27] * sleinen1 (~Adium@2001:620:0:82::104) Quit (Ping timeout: 480 seconds)
[17:27] <Geph> I've used btrfs as the default FS with standard desktop and server deployments with no issues
[17:28] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[17:28] <Geph> but not sure if using btrfs with Ceph may invoke new issues
[17:28] * drankis (~drankis__@46.109.81.218) has joined #ceph
[17:28] * erwan_taf (~erwan@46.231.131.178) Quit (Ping timeout: 480 seconds)
[17:28] <SamYaple> Geph: its a complicated subject that brings out the worst in people, but to put it simply BTRFS is stable with the standard feature set
[17:28] * linuxkidd (~linuxkidd@163.sub-70-196-5.myvzw.com) has joined #ceph
[17:28] <SamYaple> it may require a bit more management than something else
[17:29] <Geph> standard include snapshots?
[17:29] <SamYaple> but hey, at least you wont get FS corruption on a power outage (im looking at you XFS....)
[17:29] <SamYaple> yea
[17:29] <SamYaple> just dont use, say, raid5/6
[17:29] <Geph> ok
[17:30] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:30] <nils_> I had btrfs return I/O errors, but that may be related to a kernel bug.
[17:30] <Geph> ok, does erasure coded pools count?
[17:30] <Geph> as raid so to speak
[17:31] <Geph> i presume not as ceph is not using the FS software raid
[17:32] <Geph> Hi nils_, not disk related?
[17:32] <Geph> or running a pool to 100%
[17:32] * sleinen2 (~Adium@2001:620:0:82::100) Quit (Read error: Connection reset by peer)
[17:32] <nils_> not disk related, btrfs scrub returned no errors, however the OSD broke.
[17:32] <nils_> it recovered but I had an OSD fail each day
[17:33] <nils_> or more
[17:33] <Geph> wow
[17:33] <nils_> wow indeed ;)
[17:35] <Geph> nils_: which kernel version?
[17:35] * jowilkin (~jowilkin@2601:644:4000:b0bf:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[17:35] <nils_> Geph, I think it was some sort of 3.19 ubuntu Frankenkernel
[17:35] <nils_> since I'm not allowed to run custom kernels my only option was to switch to XFS
[17:37] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[17:37] * dugravot61 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Quit: Leaving.)
[17:38] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[17:41] * rcernin (~rcernin@nat-pool-brq-t.redhat.com) Quit (Ping timeout: 480 seconds)
[17:43] * TMM (~hp@185.5.122.2) Quit (Quit: Ex-Chat)
[17:44] * jowilkin (~jowilkin@2601:644:4000:b0bf:ea2a:eaff:fe08:3f1d) has joined #ceph
[17:45] * lmb (~lmb@charybdis-ext.suse.de) has joined #ceph
[17:49] * sleinen (~Adium@37.203.130.34) has joined #ceph
[17:50] <Geph> SamYaple: i thought XFS was a journaling FS
[17:50] * sleinen1 (~Adium@2001:620:0:82::102) has joined #ceph
[17:50] * jluis (~joao@charybdis-ext.suse.de) Quit (Quit: leaving)
[17:51] <Geph> so should it not be "protected" from power outages the same as ext4
[17:51] * KaZeR (~KaZeR@64.201.252.132) has joined #ceph
[17:51] * muninn (~oftc-webi@m143.zih.tu-dresden.de) has joined #ceph
[17:51] <muninn> hi :)
[17:52] <Geph> not sure about btrfs though, heard it suffers badly on power loss
[17:52] <neurodrone> Is there a way to find out if http://tracker.ceph.com/issues/11347 got backported or not?
[17:53] * shr0p (~shr0p@137.118.212.221) has joined #ceph
[17:55] <Geph> nils_: what version of ceph are you running?
[17:55] * isaxi (~CoZmicShR@84ZAACKT6.tor-irc.dnsbl.oftc.net) Quit ()
[17:55] * _br_ (~dug@192.42.115.101) has joined #ceph
[17:55] * davidzlap1 (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) has joined #ceph
[17:55] <nils_> Geph, currently infernalis
[17:55] * davidzlap (~Adium@2605:e000:1313:8003:1925:5ea3:be97:70f1) Quit (Read error: Connection reset by peer)
[17:57] * sleinen (~Adium@37.203.130.34) Quit (Ping timeout: 480 seconds)
[17:57] <SamYaple> Geph: you absolutely didn't hear it was bad on a power loss for btrfs. it's CoW, it _can't_ go bad for that
[17:57] <neurodrone> Has anyone seen the problem where zap disk fails? With something like: "Partition(s) 1 on /dev/sdb have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes."
[17:57] <SamYaple> journaling means playing back; CoW means it's committed, what's on the fs is valid
[17:58] <neurodrone> Is there any way to tell GPT that I am done with it?
[17:58] <neurodrone> Using ceph-disk ideally.
[17:58] * lmb (~lmb@charybdis-ext.suse.de) Quit (Ping timeout: 480 seconds)
[17:59] * shylesh__ (~shylesh@59.95.69.8) has joined #ceph
[18:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[18:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[18:01] <neurodrone> Oh and the device and everything are unmounted fine.
[18:01] <Geph> SamYaple: thanks, yea I'm not sure, I've never had data loss due to power loss
[18:02] <SamYaple> Geph: http://xfs.org/index.php/XFS_FAQ#Q:_Why_do_I_see_binary_NULLS_in_some_files_after_recovery_when_I_unplugged_the_power.3F
[18:02] <SamYaple> that question and the next question
[18:02] <Geph> only scared by the people who report these issues
[18:02] <SamYaple> both talk about data loss after failure
[18:03] * ircolle (~Adium@c-71-229-136-109.hsd1.co.comcast.net) has joined #ceph
[18:04] <Geph> I'm planning to deploy my production cluster on btrfs
[18:04] * i_m (~ivan.miro@195.230.71.20) Quit (Ping timeout: 480 seconds)
[18:04] <Geph> SamYaple: thanks for the link
[18:05] <SamYaple> Geph: again despite what ive said, XFS is probably the better choice for Ceph since it's more widely used and performance is almost identical
[18:05] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[18:06] <SamYaple> btrfs wins early on, but in the long run btrfs slows just like and CoW
[18:06] <SamYaple> xfs is consistent performance throughout
[18:06] * ircolle1 (~Adium@2601:285:201:2bf9:7044:399c:1cae:4920) has joined #ceph
[18:06] <SamYaple> like any* CoW filesystem
[18:07] <Be-El> on the other hand btrfs has data checksum and is able to report bit rot
[18:08] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) has joined #ceph
[18:08] * davidzlap1 (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) Quit (Read error: Connection reset by peer)
[18:08] <SamYaple> true, but so does ceph
[18:09] <nils_> neurodrone, in some cases, partprobe may work
[18:09] <neurodrone> ceph-disk already does partprobe no?
[18:09] <SamYaple> neurodrone: it does
[18:09] <SamYaple> likely you still had something using it, like device-mapper or it was mounted
[18:10] <SamYaple> run the command again and see if it errors
[18:10] * mattbenjamin (~mbenjamin@76-206-42-50.lightspeed.livnmi.sbcglobal.net) Quit (Quit: Leaving.)
[18:10] <SamYaple> it could have released
[18:10] <SamYaple> write partition again*
[18:10] * rcernin (~rcernin@77.240.179.195) has joined #ceph
[18:10] <neurodrone> I did write it again, didn't really work. Couldn't find anything else using it either. :(
[18:10] <SamYaple> you can track down what's using it, but it's normally quicker to reboot
[18:10] * jowilkin (~jowilkin@2601:644:4000:b0bf:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[18:10] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) Quit (Remote host closed the connection)
[18:10] <neurodrone> Yeah, I fell back on the reboot. :(
[18:11] <neurodrone> Normally disk zap handles it for me.
[18:11] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) has joined #ceph
[18:11] <neurodrone> But somehow it didn't this time. Could be something holding onto it.
[18:11] * dgurtner (~dgurtner@nat-pool-muc-u.redhat.com) Quit (Ping timeout: 480 seconds)
[18:11] <muninn> excuse me, I have a question regarding a ceph test installation (I'm still very new to it). after looking through mailing lists and googling I still wasn't able to find an answer; maybe I overlooked something, but I'd like to understand what happened.
[18:11] <SamYaple> ive seen devicemapper do it alot. then theres encryption (but in the end thats still DM)
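(When zap/partprobe complains the partition is still in use, it is usually worth checking for leftover device-mapper mappings or mounts before falling back on a reboot; a sketch, using the /dev/sdb from the error above:)

    lsblk /dev/sdb                 # shows any dm-crypt/LVM children still stacked on the disk
    ls /sys/block/sdb/holders/     # the kernel's view of what holds the block device
    dmsetup ls                     # leftover device-mapper maps; dmsetup remove <name> to clear one
    grep sdb /proc/mounts          # anything still mounted
    partprobe /dev/sdb             # re-read the partition table once the holders are gone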
[18:11] <muninn> The problem occured when I tried to work with newer ceph versions (infernalis) (I use default Ubuntu 14.04.3 LTS installations everywhere in the testing environment)
[18:12] * davidzlap1 (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) has joined #ceph
[18:12] * LobsterRoll (~LobsterRo@174-21-204-73.tukw.qwest.net) has joined #ceph
[18:12] <muninn> The moment I tried to install my setup with one of the above mentioned versions, I got a similar error to the one found in the ceph-users mailing list:
[18:12] * ircolle (~Adium@c-71-229-136-109.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[18:12] <Geph> Hi SamYaple, thanks good info
[18:12] <muninn> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-June/001850.html
[18:12] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) Quit (Read error: Connection reset by peer)
[18:12] <muninn> Rolling out with firefly worked just fine. For verification of the problem I tested again on a clean test environment, with the same results as above. I wasn't able to figure out by myself what the reason for this behavior is; maybe someone here has an idea or knows something I've overlooked?
[18:12] <muninn> sorry for the big post
[18:12] <neurodrone> SamYaple: Heh, ironically the reason I am doing this process is to move to dmcrypt. :)
[18:13] <SamYaple> neurodrone: is that ironic... or the reason?!?
[18:13] <neurodrone> The OSDs weren???t on dmcrypt previously.
[18:13] <neurodrone> I am moving those now to be on them and hence zapping them and such.
[18:13] <Geph> SamYaple: I'm very used to ZFS and that's why btrfs seemed like the way to go
[18:14] <SamYaple> Geph: you can use ZFS!
[18:14] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) Quit (Max SendQ exceeded)
[18:14] <SamYaple> but again, btrfs is fine. just heed the warning and keep an eye on it
[18:14] <Geph> the checksum on data is important
[18:14] <SamYaple> Geph: ceph does handle that with scrubs and deep-scrubs, but I understand
[18:14] <Geph> ZFS, really
[18:14] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) has joined #ceph
[18:14] <SamYaple> yea there was a blog post on this too recently
[18:14] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) Quit (Max SendQ exceeded)
[18:14] <SamYaple> was pretty good
[18:15] <Geph> not ready that anywhere
[18:15] <Geph> *not read that anywhere
[18:15] <Geph> but that's ZFS on linux
[18:16] <Geph> there's a performance hit
[18:16] <Geph> surely
[18:16] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) has joined #ceph
[18:16] <Geph> and it must surely be less stable and supported than btrfs
[18:17] <SamYaple> i recommend btrfs over ZFS yes :)
[18:18] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) Quit (Max SendQ exceeded)
[18:19] * jowilkin (~jowilkin@2601:644:4000:b0bf:ea2a:eaff:fe08:3f1d) has joined #ceph
[18:19] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) has joined #ceph
[18:19] <neurodrone> Anyone seen this on activate? $ sudo ceph-disk activate /dev/sdb1
[18:19] <neurodrone> mount: unknown filesystem type 'crypto_LUKS'
[18:20] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) Quit (Max SendQ exceeded)
[18:20] <Geph> SamYaple: is CENTOS maybe the better choice over Ubuntu now that RedHat owns Ceph?
[18:20] <neurodrone> The prepare completed fine (it mentioned the host was ready for osd use)
[18:20] <neurodrone> Geph: RHEL you mean?
[18:21] <Geph> well yes
[18:21] <neurodrone> Think RHEL would still be > Centos with respect to ceph-specific patches and stuff even though they both are RH.
[18:21] <Geph> i guess,
[18:21] * kefu (~kefu@114.92.107.250) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[18:22] <SamYaple> Geph: eh i think ubuntu is more tested. but stick with what you use elsewhere
[18:22] <Geph> so if I'm comfortable with both centos and ubuntu is there a preference?
[18:23] <Geph> i guess i'd go ubuntu since the default kernels are newer hence better if deploying btrfs
[18:24] * ceph-noob (~anonymous@12.124.18.126) has joined #ceph
[18:24] <SamYaple> i would too
[18:24] * muninn (~oftc-webi@m143.zih.tu-dresden.de) Quit (Ping timeout: 480 seconds)
[18:24] <SamYaple> but centos backports that stuff
[18:25] * _br_ (~dug@76GAACF5M.tor-irc.dnsbl.oftc.net) Quit ()
[18:25] <SamYaple> both should be solid
[18:25] * murmur1 (~Corti^car@Relay-J.tor-exit.network) has joined #ceph
[18:25] * bara (~bara@213.175.37.12) Quit (Ping timeout: 480 seconds)
[18:26] * reset11 (~reset11@141.244.134.53) Quit (Ping timeout: 480 seconds)
[18:27] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[18:27] * lcurtis_ (~lcurtis@47.19.105.250) has joined #ceph
[18:27] * nils_ (~nils_@doomstreet.collins.kg) Quit (Quit: This computer has gone to sleep)
[18:29] <Geph> thanks SamYaple
[18:34] * wer (~wer@216.197.66.226) has joined #ceph
[18:34] * bara (~bara@nat-pool-brq-t.redhat.com) has joined #ceph
[18:37] * bandrus (~brian@243.sub-70-211-67.myvzw.com) Quit (Quit: Leaving.)
[18:37] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:37] * bene_in_mtg (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[18:38] * bandrus (~brian@243.sub-70-211-67.myvzw.com) has joined #ceph
[18:41] * JustinRestivo (~Kalantal@192.193.172.176) has joined #ceph
[18:42] * Sirenia (~sirenia@454028b1.test.dnsbl.oftc.net) has joined #ceph
[18:42] * bara (~bara@nat-pool-brq-t.redhat.com) Quit (Quit: Bye guys!)
[18:43] * Be-El (~blinke@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Quit: Leaving.)
[18:44] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[18:46] * dgurtner (~dgurtner@p54981A6C.dip0.t-ipconnect.de) has joined #ceph
[18:47] * bandrus (~brian@243.sub-70-211-67.myvzw.com) Quit (Ping timeout: 480 seconds)
[18:49] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[18:50] * mhack is now known as mhack|lunch
[18:50] * pabluk_ is now known as pabluk__
[18:51] <neurodrone> ugh, can't find a cause for "mount: unknown filesystem type 'crypto_LUKS'".
[18:51] <neurodrone> Just made sure I have the dm_crypt kernel module loaded.
[18:53] * kanagaraj (~kanagaraj@27.7.10.108) has joined #ceph
[18:54] * sleinen1 (~Adium@2001:620:0:82::102) Quit (Ping timeout: 480 seconds)
[18:55] * murmur1 (~Corti^car@4MJAACIOZ.tor-irc.dnsbl.oftc.net) Quit ()
[18:55] * Grum (~roaet@torsrvu.snydernet.net) has joined #ceph
[18:55] * Geph (~Geoffrey@41.77.153.99) Quit (Ping timeout: 480 seconds)
[18:57] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) has joined #ceph
[18:57] * davidzlap1 (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) Quit (Read error: Connection reset by peer)
[18:59] * mattbenjamin (~mbenjamin@aa2.linuxbox.com) has joined #ceph
[19:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[19:01] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) Quit (Read error: Connection reset by peer)
[19:01] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) has joined #ceph
[19:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[19:02] * bene_in_mtg (~bene@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[19:06] * drankis (~drankis__@46.109.81.218) Quit (Ping timeout: 480 seconds)
[19:06] * agsha (~sharath.g@124.40.246.234) Quit (Remote host closed the connection)
[19:06] * squisher (~dasquishe@seeker.mcbf.net) Quit (Read error: Connection reset by peer)
[19:07] * LobsterRoll (~LobsterRo@174-21-204-73.tukw.qwest.net) Quit (Quit: LobsterRoll)
[19:07] * squisher (~dasquishe@seeker.mcbf.net) has joined #ceph
[19:08] * LobsterRoll (~LobsterRo@174-21-204-73.tukw.qwest.net) has joined #ceph
[19:10] <davidj> Curious - what do you folks use for backups? Software-wise that is.
[19:11] * cathode (~cathode@50-198-166-81-static.hfc.comcastbusiness.net) has joined #ceph
[19:12] <monsted> hashbackup can use the S3 API in ceph as a target :)
[19:12] <monsted> so can bareos
[19:14] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[19:15] * drankis (~drankis__@89.111.13.198) has joined #ceph
[19:15] * bandrus (~brian@c-50-173-245-118.hsd1.ca.comcast.net) has joined #ceph
[19:17] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[19:17] * dugravot6 (~dugravot6@4cy54-1-88-187-244-6.fbx.proxad.net) has joined #ceph
[19:18] * dugravot6 (~dugravot6@4cy54-1-88-187-244-6.fbx.proxad.net) Quit ()
[19:20] * LobsterRoll (~LobsterRo@174-21-204-73.tukw.qwest.net) Quit (Quit: LobsterRoll)
[19:23] * drankis (~drankis__@89.111.13.198) Quit (Ping timeout: 480 seconds)
[19:25] * Grum (~roaet@84ZAACKXN.tor-irc.dnsbl.oftc.net) Quit ()
[19:25] * brannmar (~demonspor@tor-exit6-readme.dfri.se) has joined #ceph
[19:26] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[19:27] * reset11 (~reset11@chello080108240025.2.14.vie.surfer.at) has joined #ceph
[19:31] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) has joined #ceph
[19:31] * davidzlap1 (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) has joined #ceph
[19:31] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) Quit (Read error: Connection reset by peer)
[19:34] * drankis (~drankis__@46.109.81.218) has joined #ceph
[19:36] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Remote host closed the connection)
[19:38] * dlan (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[19:39] * Geph (~Geoffrey@169-0-103-241.ip.afrihost.co.za) has joined #ceph
[19:39] * jordanP (~jordan@204.13-14-84.ripe.coltfrance.com) Quit (Quit: Leaving)
[19:40] * Geph (~Geoffrey@169-0-103-241.ip.afrihost.co.za) Quit ()
[19:41] * Geph (~Geoffrey@169-0-103-241.ip.afrihost.co.za) has joined #ceph
[19:48] * vata1 (~vata@207.96.182.162) has joined #ceph
[19:48] * vbellur (~vijay@c-24-62-102-75.hsd1.ma.comcast.net) has joined #ceph
[19:51] * gregsfortytwo (~gregsfort@transit-86-181-132-209.redhat.com) Quit (Ping timeout: 480 seconds)
[19:51] * gregsfortytwo (~gregsfort@transit-86-181-132-209.redhat.com) has joined #ceph
[19:54] <JustinRestivo> Anyone regularly run the ceph s3 tests? I keep exceeding my allocated bucket creations even though I know I have a 500 limit.. Anyone have any thoughts?
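(If the s3-tests suite keeps hitting the per-user bucket cap, the limit and current usage can be inspected and raised per RGW user; a sketch, where the uid is a placeholder and buckets left behind by aborted runs also count against the limit:)

    radosgw-admin user info --uid=s3test                   # "max_buckets" shows the cap
    radosgw-admin user modify --uid=s3test --max-buckets=1000
    radosgw-admin bucket list --uid=s3test                 # leftovers from aborted runs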
[19:55] * brannmar (~demonspor@7V7AACN6L.tor-irc.dnsbl.oftc.net) Quit ()
[19:55] * zaitcev (~zaitcev@c-50-130-189-82.hsd1.nm.comcast.net) has joined #ceph
[19:57] * dlan (~dennis@116.228.88.131) has joined #ceph
[19:58] * shylesh__ (~shylesh@59.95.69.8) Quit (Remote host closed the connection)
[19:58] * aj__ (~aj@fw.gkh-setu.de) Quit (Ping timeout: 480 seconds)
[19:59] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[20:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[20:01] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz...)
[20:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[20:02] * kanagaraj (~kanagaraj@27.7.10.108) Quit (Quit: Leaving)
[20:02] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[20:04] * haplo37 (~haplo37@199.91.185.156) Quit (Read error: Connection reset by peer)
[20:05] * mykola (~Mikolaj@91.225.202.116) has joined #ceph
[20:06] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) has joined #ceph
[20:06] * arthurh (~arthurh@vpn.bigbytesystems.com) Quit (Quit: This computer has gone to sleep)
[20:08] * efirs (~firs@c-50-185-70-125.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[20:08] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) has joined #ceph
[20:09] * davidzlap1 (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) Quit (Read error: Connection reset by peer)
[20:10] <ceph-noob> Interested in trying 10.0.3, but I see I have to build it myself. The Ceph blog mentions using gitbuilder.ceph.com. I am not familiar with this. Can anyone point me to some notes to get me started?
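(gitbuilder.ceph.com hosted automatic builds of development branches; when packages for your distro are missing, building the tag from git is the usual fallback. A rough sketch for the autotools tree of that era; script names and options may have shifted between releases, so check the README of the tag you check out:)

    git clone --branch v10.0.3 https://github.com/ceph/ceph.git
    cd ceph
    git submodule update --init --recursive
    ./install-deps.sh           # installs build dependencies for the detected distro
    ./autogen.sh && ./configure
    make -j"$(nproc)"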
[20:11] * nooxqe (~noooxqe@host-78-145-43-224.as13285.net) Quit (Ping timeout: 480 seconds)
[20:11] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) Quit (Quit: Ex-Chat)
[20:12] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) has joined #ceph
[20:15] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) Quit ()
[20:15] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) has joined #ceph
[20:16] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) Quit ()
[20:16] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) has joined #ceph
[20:19] <neurodrone> Ugh. Activating dmcrypt volumes is the worst.
[20:19] * mrasmus (~mrasmus@mrasm.us) has left #ceph
[20:19] <neurodrone> Still unable to figure out why this fails: $ sudo ceph-disk activate /dev/sdb1
[20:19] <neurodrone> [12:22pm] neurodrone: mount: unknown filesystem type 'crypto_LUKS'
[20:25] * spate (~Kizzi@199.87.154.251) has joined #ceph
[20:26] * sleinen (~Adium@2001:620:0:82::104) has joined #ceph
[20:26] * aj__ (~aj@p578b6aa1.dip0.t-ipconnect.de) has joined #ceph
[20:27] * jowilkin (~jowilkin@2601:644:4000:b0bf:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[20:29] * JustinRestivo (~Kalantal@192.193.172.176) Quit (Quit: Leaving)
[20:30] * vbellur (~vijay@c-24-62-102-75.hsd1.ma.comcast.net) Quit (Ping timeout: 480 seconds)
[20:32] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[20:32] * cathode (~cathode@50-198-166-81-static.hfc.comcastbusiness.net) Quit (Quit: Leaving)
[20:33] * vbellur (~vijay@173-13-111-22-NewEngland.hfc.comcastbusiness.net) has joined #ceph
[20:35] * erwan_taf (~erwan@37.161.172.105) has joined #ceph
[20:36] * jowilkin (~jowilkin@2601:644:4000:b0bf:ea2a:eaff:fe08:3f1d) has joined #ceph
[20:39] * erwan_taf (~erwan@37.161.172.105) Quit (Remote host closed the connection)
[20:42] * wushudoin (~wushudoin@38.140.108.2) Quit (Ping timeout: 480 seconds)
[20:42] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) Quit (Read error: Connection reset by peer)
[20:43] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) has joined #ceph
[20:43] * sleinen (~Adium@2001:620:0:82::104) Quit (Ping timeout: 480 seconds)
[20:44] * nooxqe (~noooxqe@host-78-145-43-224.as13285.net) has joined #ceph
[20:45] * erwan_taf (~erwan@37.161.172.105) has joined #ceph
[20:49] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[20:50] * arthurh (~arthurh@65.100.24.206) has joined #ceph
[20:50] * erwan_taf (~erwan@37.161.172.105) Quit (Quit: Leaving)
[20:51] * erwan_taf (~erwan@37.161.172.105) has joined #ceph
[20:53] * wushudoin (~wushudoin@38.140.108.3) has joined #ceph
[20:53] * quinoa (~quinoa@c-76-118-182-194.hsd1.ma.comcast.net) has joined #ceph
[20:55] * spate (~Kizzi@7V7AACN71.tor-irc.dnsbl.oftc.net) Quit ()
[20:55] * kraken (~kraken@8.43.84.3) Quit (Ping timeout: 480 seconds)
[20:57] * sleinen (~Adium@2001:620:0:82::101) has joined #ceph
[21:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[21:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[21:03] * garphy is now known as garphy`aw
[21:06] * quinoa (~quinoa@c-76-118-182-194.hsd1.ma.comcast.net) Quit (Remote host closed the connection)
[21:07] * vbellur (~vijay@173-13-111-22-NewEngland.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[21:12] * shr0p_ (~shr0p@137.118.212.221) has joined #ceph
[21:14] * wushudoin_ (~wushudoin@38.140.108.2) has joined #ceph
[21:16] * sleinen (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[21:17] * bandrus (~brian@c-50-173-245-118.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[21:18] * al (d@niel.cx) Quit (Remote host closed the connection)
[21:18] * sleinen (~Adium@2001:620:0:82::101) has joined #ceph
[21:18] * bandrus (~brian@c-50-173-245-118.hsd1.ca.comcast.net) has joined #ceph
[21:19] * shr0p (~shr0p@137.118.212.221) Quit (Ping timeout: 480 seconds)
[21:21] * wushudoin (~wushudoin@38.140.108.3) Quit (Ping timeout: 480 seconds)
[21:22] * al (quassel@niel.cx) has joined #ceph
[21:23] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) Quit (Read error: Connection reset by peer)
[21:23] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) has joined #ceph
[21:25] * drankis (~drankis__@46.109.81.218) Quit (Ping timeout: 480 seconds)
[21:29] * sleinen (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[21:29] * Cue (~Kayla@exit.tor.uwaterloo.ca) has joined #ceph
[21:30] * enax (~enax@94-21-125-182.pool.digikabel.hu) has joined #ceph
[21:30] * sleinen (~Adium@2001:620:0:82::101) has joined #ceph
[21:30] * Geph (~Geoffrey@169-0-103-241.ip.afrihost.co.za) Quit (Read error: Connection reset by peer)
[21:31] * Geph (~Geoffrey@169-0-137-94.ip.afrihost.co.za) has joined #ceph
[21:34] * drankis (~drankis__@89.111.13.198) has joined #ceph
[21:39] * davidzlap1 (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) has joined #ceph
[21:39] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) Quit (Read error: Connection reset by peer)
[21:39] * garphy`aw is now known as garphy
[21:42] * linuxkidd_ (~linuxkidd@107-214-160-129.lightspeed.rcsntx.sbcglobal.net) has joined #ceph
[21:45] <bjozet> neurodrone: are you trying to run dmcrypt-volumes as OSD?
[21:46] <neurodrone> Yes.
[21:46] <neurodrone> Well, I am trying to encrypt both the OSDs and the journal.
[21:46] <bjozet> neurodrone: i have not tried it myself, I bet you need to feed the unlocked device to ceph-disk, not the underlying luks device
[21:46] <neurodrone> Sure, but I was hoping that ceph-disk activate would handle that for me.
[21:46] <bjozet> cryptsetup luksOpen /dev/mapper/yourdevice encrypted_osd_device
[20:47] <neurodrone> Interestingly the /dev/mapper doesn't show any devices mounted on it.
[21:47] <TheSov2> i always wondered why anyone would run dmcrypt at the OSD level instead of the RBD level
[21:47] <neurodrone> TheSov2: What do you mean?
[21:48] <bjozet> neurodrone: ah, as i said, have not tried it, but suppose it would work if your "ceph-disk prepare" command contained the proper --dmcrypt stuff
[21:48] <TheSov2> i mean why slow down the entire cluster instead of just the RBD's you need encrypted
[20:48] <neurodrone> bjozet: I don't see a command that unmounts the encrypted device from /dev/mapper either.
[20:48] <neurodrone> bjozet: Yeah, that's what I am using while preparing; --dmcrypt and --dmcrypt-key-dir pointing to an appropriate destination on my boot disk.
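For reference, a sketch of the workflow being described, assuming hammer-era ceph-disk; /dev/sdb is a placeholder and the key directory shown is the usual default, adjust as needed:

    # prepare partitions the disk, sets up LUKS on the data and journal
    # partitions, and drops the keys into the key directory.
    sudo ceph-disk prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys /dev/sdb

    # Activation is normally driven by udev; run by hand it should open the
    # LUKS containers and mount the mapped data device under /var/lib/ceph/osd.
    sudo ceph-disk activate /dev/sdb1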
[21:49] <bjozet> neurodrone: but as TheSov2 said; are you absolutely positive you need encryption at OSD-level and not at RBD level?
[21:49] <bjozet> :)
[21:50] <neurodrone> Yep. The RBD will not be under my control. The users who mount it on their guest instances can choose the mechanism they need for encryption. :)
[21:51] <bjozet> ok, and they trust you and your keys with their data? I wouldn't :-)
[21:51] <bjozet> if i feel i need to have disks encrypted i make sure to do it myself
[21:51] <bjozet> but i don't know your usecase :-)
[21:52] <neurodrone> Indeed. That's the idea. User-level encryption will be something that they control.
[21:52] * linuxkidd_ (~linuxkidd@107-214-160-129.lightspeed.rcsntx.sbcglobal.net) Quit (Quit: This computer has gone to sleep)
[21:52] <TheSov2> neurodrone, you do understand that your cluster will move slow as molasses, correct? and nested encryption doesn't sound like a good idea to me
[21:53] <neurodrone> True. But we have no idea how slowly the molasses moves right now. :) Hence the need to try dmcrypt out and bench it.
[21:54] <neurodrone> If only I could get this activate working.
[21:55] <neurodrone> The documentation is barely of any help here (no surprises).
[21:55] <TheSov2> i am against this union! its unholy to have storage level encryption!
[21:55] <TheSov2> blasphemer!
[21:55] <neurodrone> Yep. But surely orthogonal to my current exercise. ;)
[21:56] <neurodrone> Ceph provides a facility that I want to use and I cannot. For some reason. Need.. to.. figure.. out.. why.
[21:57] <neurodrone> Also the cleanup fails. I see lock files strewn in my /var/lib/ceph/tmp which is terrible.
[21:57] <koollman> TheSov2: it does not have to be slow, really
[21:58] <neurodrone> I did hear about this udev race somewhere. Maybe it's a real thing.
[21:59] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) has joined #ceph
[21:59] * oscar_fnox (~oscar@user170.217-10-117.netatonce.net) has joined #ceph
[21:59] * quinoa (~quinoa@c-76-118-182-194.hsd1.ma.comcast.net) has joined #ceph
[21:59] * Cue (~Kayla@76GAACGDW.tor-irc.dnsbl.oftc.net) Quit ()
[21:59] * adept256 (~toast@77.71.106.201) has joined #ceph
[21:59] * davidzlap1 (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) Quit (Read error: Connection reset by peer)
[22:00] * MentalRay_ (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[22:00] <koollman> TheSov2: aes-xts 256b 1920.4 MiB/s 1924.1 MiB/s (first is encryption, second is decryption). that's with just one core on an i7-2600. aes hardware acceleration helps :)
[22:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[22:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[22:01] <koollman> with a typical ceph setup, there would be one thread per osd. so ... hardly a speed problem unless you're using nvme. maybe a latency problem
[22:01] <TheSov2> BLASPHEMY I SAY!
[22:01] <neurodrone> NVMe? check.
[22:01] * kaisan (~kai@zaphod.pira.at) Quit (Quit: leaving)
[22:02] <neurodrone> Don't think my machines will hold me back here; but in any case I need the numbers to tell the story.
[22:02] <koollman> 'cryptsetup benchmark'
[22:02] <koollman> :)
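cryptsetup benchmark is a quick way to reproduce the raw cipher numbers koollman is quoting, independent of Ceph. A minimal sketch; the output columns are approximate and the figures below are only illustrative:

    # Confirm AES-NI is available, then measure raw dm-crypt cipher throughput.
    grep -m1 -o aes /proc/cpuinfo
    cryptsetup benchmark
    #   Algorithm | Key |  Encryption |  Decryption
    #     aes-xts   256b  ~1900 MiB/s   ~1900 MiB/s   (one core, per koollman)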
[22:03] * MannerMan (~oscar@user170.217-10-117.netatonce.net) Quit (Ping timeout: 480 seconds)
[22:03] <neurodrone> Actually, I want to get everything bootstrapped and rados/rbd bench it out.
[22:03] <koollman> the i7-2600 is from 2011. you may have some improvements now :)
[22:03] <neurodrone> cryptsetup bench will test it sans ceph.
[22:03] * LeaChim (~LeaChim@host86-175-32-149.range86-175.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[22:04] <koollman> yes. but you can see if it will be slower or faster than raw disk. if it's slower, you know you have a bottleneck. if not, you are adding overhead but it may be mostly ok
[22:04] <neurodrone> Oh sure, didn't mean to choose one over the other. :) Just want to make sure to measure latency/throughput from both sides and compare.
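For the Ceph-side numbers, a hedged sketch of the rados/rbd benching neurodrone mentions; the pool and image names are placeholders:

    # RADOS-level write and sequential-read benchmarks against a test pool.
    rados bench -p testpool 60 write --no-cleanup
    rados bench -p testpool 60 seq
    rados -p testpool cleanup

    # Block-level view through RBD (hammer still ships rbd bench-write).
    rbd create testpool/benchimg --size 10240
    rbd bench-write testpool/benchimg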
[22:06] <bjozet> neurodrone: but what are you expecting your encrypted OSDs to protect against? The only thing I can think of is someone physically stealing a server or a disk.
[22:06] <neurodrone> What unmounts the mount from /dev/mapper after prepare?
[22:06] <bjozet> neurodrone: if they steal the whole server, they'll have the key to go with it anyway, unless you store it in an HSM, or enter it manually at each start-up.
[22:06] <bjozet> or am i missing something obvious?
[22:06] <koollman> depends how the key is stored
[22:07] <koollman> usually there's a passphrase on it
[22:07] * dougf_ (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) Quit (Quit: bye)
[22:07] * MentalRay (~MentalRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[22:07] * dougf (~dougf@96-38-99-179.dhcp.jcsn.tn.charter.com) has joined #ceph
[22:07] <bjozet> ok, if the admin needs to unlock it at every boot/use, sure :)
[22:07] <koollman> or in some funny setups the key is pushed to the server on boot, or pulled by it, so it does not work outside the intended network
[22:10] <neurodrone> Yeah. The idea is to protect users against any potential leaks that could happen. The fact that they never have doesn't mean we can always assume the best-case scenario.
[22:10] <neurodrone> There are things we can do at the provider level.
[22:10] <neurodrone> There are things users can do at their level, and so on.
[22:10] <bjozet> ok :)
[22:10] <neurodrone> Security isn't usually managed at a single layer in the stack.
[22:11] <neurodrone> Hopefully the perf won't let us down. :)
[22:11] <bjozet> interesting to hear what you get though :-)
[22:11] <neurodrone> But that's only after this thing works. Until then it's all moot, heh.
[22:11] <bjozet> in terms of performance
[22:12] <neurodrone> I am very confused here. I don't see what unmounts the mount from /dev/mapper here: https://github.com/ceph/ceph/blob/hammer/src/ceph-disk/ but it isn't mounted.
[22:13] * duderonomy (~duderonom@c-24-7-50-110.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[22:13] * LeaChim (~LeaChim@host86-168-125-153.range86-168.btcentralplus.com) has joined #ceph
[22:13] <neurodrone> Does this do that? https://github.com/ceph/ceph/blob/hammer/src/ceph-disk/#L877-L881
[22:14] <bjozet> looks like it
[22:15] <neurodrone> Ugh. Odd that activate doesn't care.
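A quick, Ceph-agnostic way to check what state prepare actually left behind, which is the question here (assumes nothing beyond standard device-mapper tooling):

    sudo dmsetup ls                  # any dm-crypt mappings still open?
    ls -l /dev/mapper/               # only "control" appears when none are
    mount | grep /var/lib/ceph/osd   # is an OSD data directory actually mounted?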
[22:15] * arthurh (~arthurh@65.100.24.206) Quit (Quit: This computer has gone to sleep)
[22:16] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) Quit (Remote host closed the connection)
[22:16] * dalegaard-39554 (~dalegaard@vps.devrandom.dk) has joined #ceph
[22:17] * erwan_taf (~erwan@37.161.172.105) Quit (Ping timeout: 480 seconds)
[22:18] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[22:19] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[22:19] * rendar (~I@host251-176-dynamic.32-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[22:20] * kaisan (~kai@zaphod.pira.at) has joined #ceph
[22:22] * rendar (~I@host251-176-dynamic.32-79-r.retail.telecomitalia.it) has joined #ceph
[22:22] * mykola (~Mikolaj@91.225.202.116) Quit (Quit: away)
[22:26] * thansen (~thansen@17.253.sfcn.org) has joined #ceph
[22:29] <neurodrone> Any idea what copies the udev rules to /etc/udev/rules.d?
[22:29] * adept256 (~toast@7V7AACOAS.tor-irc.dnsbl.oftc.net) Quit ()
[22:29] * Coe|work (~Zombiekil@tor-exit.eecs.umich.edu) has joined #ceph
[22:29] * garphy is now known as garphy`aw
[22:30] <neurodrone> # grep -r ceph /etc/udev
[22:30] <neurodrone> #
[22:30] <neurodrone> Clearly this doesn't look right.
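The grep only covers /etc/udev; distribution packages normally install their rules under /lib/udev/rules.d instead, which is where the ceph package ships 95-ceph-osd.rules (package name and exact path may vary by distro):

    ls /lib/udev/rules.d/ | grep -i ceph
    dpkg -L ceph | grep udev         # Debian/Ubuntu; use rpm -ql ceph on RPM systems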
[22:30] * bvi (~bastiaan@152-64-132-5.ftth.glasoperator.nl) Quit (Ping timeout: 480 seconds)
[22:33] * arthurh (~arthurh@65.100.24.206) has joined #ceph
[22:35] * branto (~borix@ip-78-102-208-28.net.upcbroadband.cz) Quit (Quit: Leaving.)
[22:38] * sleinen (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[22:38] * shr0p_ (~shr0p@137.118.212.221) Quit (Ping timeout: 480 seconds)
[22:39] * MentalRay_ (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[22:40] * haasn` (~nand@p57A50E96.dip0.t-ipconnect.de) Quit (Quit: WeeChat 1.4-dev)
[22:43] * sleinen (~Adium@2001:620:0:82::101) has joined #ceph
[22:46] * drankis (~drankis__@89.111.13.198) Quit (Remote host closed the connection)
[22:49] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[22:49] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) Quit ()
[22:53] * fdmanana (~fdmanana@2001:8a0:6dfd:6d01:8943:5c4d:21a9:82cf) Quit (Ping timeout: 480 seconds)
[22:54] * jowilkin (~jowilkin@2601:644:4000:b0bf:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[22:55] <davidj> @monsted hm! going to try that. :)
[22:55] <davidj> (thanks)
[22:57] * garphy`aw is now known as garphy
[22:59] * MentalRay (~MentalRay@office-mtl1-nat-146-218-70-69.gtcomm.net) has joined #ceph
[22:59] * Coe|work (~Zombiekil@7V7AACOBV.tor-irc.dnsbl.oftc.net) Quit ()
[22:59] * ZombieL (~tZ@176.10.99.209) has joined #ceph
[23:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) Quit (Remote host closed the connection)
[23:01] * haomaiwang (~haomaiwan@li745-113.members.linode.com) has joined #ceph
[23:02] * mhack|lunch is now known as mhack|afk
[23:02] * arthurh (~arthurh@65.100.24.206) Quit (Read error: Connection reset by peer)
[23:02] * jowilkin (~jowilkin@2601:644:4000:b0bf:ea2a:eaff:fe08:3f1d) has joined #ceph
[23:04] * georgem (~Adium@206.108.127.16) has joined #ceph
[23:07] * joshd (~jdurgin@206.169.83.146) Quit (Remote host closed the connection)
[23:08] * joshd (~jdurgin@206.169.83.146) has joined #ceph
[23:08] * arthurh (~arthurh@65.100.24.206) has joined #ceph
[23:11] * ereb0s (~ereb0s@107-217-135-120.lightspeed.lsvlky.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[23:12] * sleinen (~Adium@2001:620:0:82::101) Quit (Ping timeout: 480 seconds)
[23:17] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[23:18] * garphy is now known as garphy`aw
[23:22] <monsted> has anyone here used the seagate archive (SMR) drives with ceph?
[23:23] <The1_> monsted: I seem to recall someone has talked about that, yes
[23:23] <neurodrone> So it looks like there's an issue with udev querying on my system.
[23:23] <neurodrone> http://susepaste.org/view/raw/10207806 shows that no env var named 'ID_PART_ENTRY_TYPE' exists on sda1.
[23:23] <neurodrone> Or at least it doesn't show up in the monitor output.
[23:23] <monsted> i'm pondering shoehorning a couple into a toy server to use as a backup target
[23:23] <The1_> monsted: woops.. yes, I remember someone saying something about SMR once..
[23:24] * shr0p_ (~shr0p@137.118.212.221) has joined #ceph
[23:24] <The1_> afair it was slow, but it worked
[23:24] <neurodrone> https://github.com/ceph/ceph/blob/hammer/udev/95-ceph-osd.rules#L35-L43 should fail as a result of that.
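To see what udev actually reports for the partition, and whether ID_PART_ENTRY_TYPE is set at all, something like the following should do (the device path is a placeholder; substitute the partition in question):

    sudo udevadm info --query=property --name=/dev/sdb1 | grep ID_PART_ENTRY
    sudo blkid -p -o udev /dev/sdb1          # probes the partition entry directly
    # Replay the add event that 95-ceph-osd.rules is supposed to match:
    sudo udevadm trigger --action=add --sysname-match=sdb1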
[23:28] * enax (~enax@94-21-125-182.pool.digikabel.hu) Quit (Ping timeout: 480 seconds)
[23:28] * ceph-noob (~anonymous@12.124.18.126) Quit (Ping timeout: 480 seconds)
[23:28] * davidzlap1 (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) has joined #ceph
[23:28] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) Quit (Read error: Connection reset by peer)
[23:28] * johnavp1989 (~jpetrini@8.39.115.8) Quit (Ping timeout: 480 seconds)
[23:29] * ZombieL (~tZ@4MJAACI10.tor-irc.dnsbl.oftc.net) Quit ()
[23:31] * davidzlap1 (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) Quit (Read error: Connection reset by peer)
[23:31] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) has joined #ceph
[23:34] * arthurh (~arthurh@65.100.24.206) Quit (Ping timeout: 480 seconds)
[23:35] * TheSov2 (~TheSov@cip-248.trustwave.com) Quit (Read error: Connection reset by peer)
[23:36] * georgem (~Adium@206.108.127.16) Quit (Quit: Leaving.)
[23:38] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) Quit (Read error: Connection reset by peer)
[23:38] * davidzlap (~Adium@2605:e000:1313:8003:d8b5:d8f5:617e:f8b4) has joined #ceph
[23:42] * bniver (~bniver@pool-173-48-58-27.bstnma.fios.verizon.net) Quit (Remote host closed the connection)
[23:44] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) Quit (Quit: jdillaman)
[23:48] * arthurh (~arthurh@65.100.24.206) has joined #ceph
[23:52] * ceph-noob (~anonymous@12.124.18.126) has joined #ceph
[23:52] * haplo37 (~haplo37@199.91.185.156) Quit (Read error: Connection reset by peer)
[23:56] * arthurh (~arthurh@65.100.24.206) Quit (Ping timeout: 480 seconds)
[23:56] <monsted> The1_: i'm thinking the SSD journal and an SMR-aware file system should help
[23:57] <The1_> monsted: yeah - but probably with a bigger-than-usual journal and somewhat tweaked flushing settings
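The sort of tweak The1_ means might look like the ceph.conf fragment below; the values are purely illustrative placeholders, not recommendations, and would need testing against the SMR drives in question:

    cat >> /etc/ceph/ceph.conf <<'EOF'
    [osd]
    # larger SSD journal so write bursts land there before the SMR disk absorbs them
    osd journal size = 20480
    # let the filestore accumulate more before syncing to the slow data disk
    filestore min sync interval = 10
    filestore max sync interval = 30
    EOF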
[23:59] * sleinen (~Adium@37.203.130.34) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.