#ceph IRC Log

Index

IRC Log for 2013-11-25

Timestamps are in GMT/BST.

[0:00] * xmltok (~xmltok@cpe-23-240-222-226.socal.res.rr.com) has joined #ceph
[0:03] * rendar (~s@host152-181-dynamic.7-87-r.retail.telecomitalia.it) Quit ()
[0:05] * sleinen (~Adium@2001:620:0:26:1874:6ede:fe44:2e82) Quit (Quit: Leaving.)
[0:05] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[0:09] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Read error: Operation timed out)
[0:11] * AfC (~andrew@203-219-79-122.static.tpgi.com.au) has joined #ceph
[0:11] * xmltok (~xmltok@cpe-23-240-222-226.socal.res.rr.com) Quit (Quit: Bye!)
[0:12] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[0:12] * KevinPerks1 (~Adium@97.68.216.74) has left #ceph
[0:13] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:24] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[0:35] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[0:40] * allsystemsarego (~allsystem@5-12-240-115.residential.rdsnet.ro) Quit (Quit: Leaving)
[0:42] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[0:43] * sarob (~sarob@adsl-76-252-223-232.dsl.pltn13.sbcglobal.net) Quit (Remote host closed the connection)
[0:44] * sarob (~sarob@adsl-76-252-223-232.dsl.pltn13.sbcglobal.net) has joined #ceph
[0:46] * sarob_ (~sarob@adsl-76-252-223-232.dsl.pltn13.sbcglobal.net) has joined #ceph
[0:46] * sarob (~sarob@adsl-76-252-223-232.dsl.pltn13.sbcglobal.net) Quit (Read error: Connection reset by peer)
[0:47] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:50] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[0:54] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[0:54] * KindOne (~KindOne@0001a7db.user.oftc.net) has joined #ceph
[1:02] * qoreQyaS (~qoreQyaS@ptang.nmmn.com) has joined #ceph
[1:02] <qoreQyaS> hey guys
[1:03] <qoreQyaS> i'd like to check out ceph, so i'm following the storage cluster quick start guide
[1:03] * KindOne (~KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:03] <qoreQyaS> when i try to add the monitor node the key doesn't get generated
[1:03] <qoreQyaS> the python process doesn't seem to finish
[1:03] <qoreQyaS> any hints?
[1:04] * KindOne (KindOne@0001a7db.user.oftc.net) has joined #ceph
[1:04] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[1:06] * yanzheng (~zhyan@jfdmzpr02-ext.jf.intel.com) has joined #ceph
[1:07] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Read error: Operation timed out)
[1:08] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:13] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[1:16] * ircolle (~Adium@2601:1:8380:2d9:5cff:2113:3f42:40fd) has joined #ceph
[1:17] <aarontc> qoreQyaS: are you using ceph-deploy?
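For context, the Storage Cluster Quick Start that qoreQyaS is following drives monitor creation through ceph-deploy. A minimal sketch of those steps, with a hypothetical host name "mon1", looks something like this; the final key-gathering step is the one that stalls if the monitor never forms quorum (commonly a hostname or firewall issue):

```shell
# Assumed: an admin node with passwordless SSH to a host named "mon1".
ceph-deploy new mon1          # writes ceph.conf and a ceph.mon.keyring
ceph-deploy install mon1      # installs the ceph packages on the target
ceph-deploy mon create mon1   # starts the monitor daemon
ceph-deploy gatherkeys mon1   # pulls the generated keys back to the admin node;
                              # hangs/fails if the mon never reached quorum
```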
[1:18] * yanzheng (~zhyan@jfdmzpr02-ext.jf.intel.com) Quit (Remote host closed the connection)
[1:18] * sarob_ (~sarob@adsl-76-252-223-232.dsl.pltn13.sbcglobal.net) Quit (Remote host closed the connection)
[1:18] * sarob (~sarob@adsl-76-252-223-232.dsl.pltn13.sbcglobal.net) has joined #ceph
[1:19] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:20] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:21] * mozg (~andrei@host81-151-251-29.range81-151.btcentralplus.com) has joined #ceph
[1:23] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[1:26] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:26] * sarob (~sarob@adsl-76-252-223-232.dsl.pltn13.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[1:30] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[1:40] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[1:42] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[1:42] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[1:48] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[1:53] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) has joined #ceph
[1:55] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[1:56] * DarkAce-Z (~BillyMays@50.107.53.200) has joined #ceph
[1:58] * LeaChim (~LeaChim@host86-162-2-255.range86-162.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:00] * DarkAceZ (~BillyMays@50.107.53.200) Quit (Ping timeout: 480 seconds)
[2:09] * glzhao (~glzhao@118.195.65.67) has joined #ceph
[2:10] * Cube (~Cube@66-87-67-172.pools.spcsdns.net) has joined #ceph
[2:11] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[2:12] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[2:12] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[2:18] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[2:21] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[2:24] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) Quit (Ping timeout: 480 seconds)
[2:26] * qoreQyaS (~qoreQyaS@ptang.nmmn.com) Quit (Ping timeout: 480 seconds)
[2:26] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:27] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:29] * KevinPerks (~Adium@97.68.216.74) has joined #ceph
[2:29] * sarob (~sarob@2601:9:7080:13a:6122:67da:e7a8:7597) has joined #ceph
[2:32] * shang (~ShangWu@175.41.48.77) has joined #ceph
[2:37] * sarob (~sarob@2601:9:7080:13a:6122:67da:e7a8:7597) Quit (Ping timeout: 480 seconds)
[2:40] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[2:42] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[2:48] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[2:50] * jnq (~jon@gruidae.jonquinn.com) Quit (Ping timeout: 480 seconds)
[2:55] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:56] * KindOne (KindOne@0001a7db.user.oftc.net) has joined #ceph
[2:56] * diegows (~diegows@190.190.11.42) has joined #ceph
[2:56] * rongze (~rongze@117.79.232.229) has joined #ceph
[2:59] * rongze_ (~rongze@118.186.151.57) has joined #ceph
[2:59] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[3:00] * KevinPerks (~Adium@97.68.216.74) Quit (Quit: Leaving.)
[3:04] * AfC (~andrew@203-219-79-122.static.tpgi.com.au) Quit (Remote host closed the connection)
[3:04] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[3:05] * rongze (~rongze@117.79.232.229) Quit (Ping timeout: 480 seconds)
[3:07] * AfC (~andrew@203-219-79-122.static.tpgi.com.au) has joined #ceph
[3:08] * wenjianhn (~wenjianhn@111.196.85.201) has joined #ceph
[3:09] * mozg (~andrei@host81-151-251-29.range81-151.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[3:12] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[3:16] * mwarwick (~mwarwick@110-174-133-236.static.tpgi.com.au) has joined #ceph
[3:16] * sarob (~sarob@2601:9:7080:13a:852a:24c7:1c45:58b6) has joined #ceph
[3:22] * diegows (~diegows@190.190.11.42) Quit (Ping timeout: 480 seconds)
[3:24] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[3:40] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[3:42] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[3:45] * r0r_taga (~nick@greenback.pod4.org) Quit (Ping timeout: 480 seconds)
[3:48] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[3:50] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[3:52] * sarob (~sarob@2601:9:7080:13a:852a:24c7:1c45:58b6) Quit (Remote host closed the connection)
[3:53] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[4:01] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:01] * Cube (~Cube@66-87-67-172.pools.spcsdns.net) Quit (Quit: Leaving.)
[4:04] * ircolle (~Adium@2601:1:8380:2d9:5cff:2113:3f42:40fd) Quit (Quit: Leaving.)
[4:09] * rongze_ (~rongze@118.186.151.57) Quit (Remote host closed the connection)
[4:11] * Dark-Ace-Z (~BillyMays@50.107.53.200) has joined #ceph
[4:12] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[4:16] * DarkAce-Z (~BillyMays@50.107.53.200) Quit (Ping timeout: 480 seconds)
[4:17] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[4:19] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[4:19] * Dark-Ace-Z is now known as DarkAceZ
[4:22] * rongze (~rongze@117.79.232.236) has joined #ceph
[4:27] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[4:28] * `10_ (~10@juke.fm) Quit (Ping timeout: 480 seconds)
[4:29] * `10_ (~10@juke.fm) has joined #ceph
[4:29] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[4:41] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[4:42] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[4:43] * Cube (~Cube@66-87-67-172.pools.spcsdns.net) has joined #ceph
[4:43] * wenjianhn (~wenjianhn@111.196.85.201) Quit (Ping timeout: 480 seconds)
[4:49] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[4:54] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[5:02] * r0r_taga (~nick@greenback.pod4.org) has joined #ceph
[5:03] * yy-nm (~Thunderbi@58.100.73.58) has joined #ceph
[5:03] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[5:05] * yy-nm (~Thunderbi@58.100.73.58) Quit ()
[5:07] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Read error: Operation timed out)
[5:11] * Hakisho (~Hakisho@0001be3c.user.oftc.net) Quit (Ping timeout: 480 seconds)
[5:11] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:12] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[5:15] * Hakisho (~Hakisho@0001be3c.user.oftc.net) has joined #ceph
[5:19] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[5:20] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[5:27] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:39] * wenjianhn (~wenjianhn@222.129.33.203) has joined #ceph
[5:41] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[5:42] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[5:49] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[5:50] * rongze (~rongze@117.79.232.236) Quit (Remote host closed the connection)
[5:51] * Sodo (~Sodo@a88-113-108-239.elisa-laajakaista.fi) has joined #ceph
[5:53] * rongze (~rongze@117.79.232.204) has joined #ceph
[5:56] * wenjianhn (~wenjianhn@222.129.33.203) Quit (Quit: Leaving)
[5:58] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[6:00] * scuttlemonkey (~scuttlemo@c-69-244-181-5.hsd1.mi.comcast.net) has joined #ceph
[6:00] * ChanServ sets mode +o scuttlemonkey
[6:06] * paveraware (~tomc@75-162-211-132.slkc.qwest.net) has joined #ceph
[6:08] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[6:12] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[6:14] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:20] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[6:22] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[6:23] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[6:23] * haomaiwang (~haomaiwan@117.79.232.229) Quit (Read error: Connection reset by peer)
[6:25] * haomaiwang (~haomaiwan@211.155.113.217) has joined #ceph
[6:27] * ircolle (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit (Quit: Leaving.)
[6:28] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[6:28] <paveraware> well.. topro, putting an FS on the ssd helps, but mostly because I can call fstrim and make the ssd fast again. It really seems like having journals on ssds is *not* a very good solution
[6:29] <paveraware> just doing a few benchmarks, and largish writes fill up the ssd; basically every 30 minutes I need to run fstrim
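The workaround paveraware describes, periodically running fstrim against the filesystem holding the journals, can be sketched as follows; the mount point here is an assumption, not from the log:

```shell
# Trim free blocks on the SSD-backed filesystem; -v reports bytes trimmed.
fstrim -v /var/lib/ceph/journals

# Scheduled every 30 minutes via cron, matching the interval mentioned above:
# */30 * * * * /sbin/fstrim /var/lib/ceph/journals
```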
[6:30] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[6:32] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[6:38] * xinxinsh (~xinxinsh@jfdmzpr06-ext.jf.intel.com) has joined #ceph
[6:41] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[6:43] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[6:48] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[6:49] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[6:50] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[6:51] * sarob (~sarob@2601:9:7080:13a:9cac:904b:e728:f1e4) has joined #ceph
[6:56] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[6:56] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[6:59] * sarob (~sarob@2601:9:7080:13a:9cac:904b:e728:f1e4) Quit (Ping timeout: 480 seconds)
[7:01] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[7:02] * paveraware (~tomc@75-162-211-132.slkc.qwest.net) Quit (Quit: paveraware)
[7:03] * sarob (~sarob@2601:9:7080:13a:a4ac:5a26:6088:8907) has joined #ceph
[7:12] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[7:13] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[7:16] * mwarwick (~mwarwick@110-174-133-236.static.tpgi.com.au) Quit (Remote host closed the connection)
[7:17] * AfC (~andrew@203-219-79-122.static.tpgi.com.au) Quit (Quit: Leaving.)
[7:18] * mwarwick (~mwarwick@110-174-133-236.static.tpgi.com.au) has joined #ceph
[7:19] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Read error: Operation timed out)
[7:26] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[7:38] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) Quit (Quit: Leaving.)
[7:40] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[7:42] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[7:52] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[8:06] <topro> paveraware: thanks for the feedback, i'll keep that in mind for my cluster, but I think I won't be of much help with this issue from that point on, as I have very little experience with ssds
[8:07] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Read error: Operation timed out)
[8:19] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:20] * KindOne (KindOne@0001a7db.user.oftc.net) has joined #ceph
[8:20] * mattt_ (~textual@94.236.7.190) has joined #ceph
[8:20] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[8:21] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[8:23] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[8:27] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) has joined #ceph
[8:30] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[8:36] * sleinen (~Adium@2001:620:0:26:a82d:751b:309d:1903) has joined #ceph
[8:39] * sarob (~sarob@2601:9:7080:13a:a4ac:5a26:6088:8907) Quit (Remote host closed the connection)
[8:39] * sarob (~sarob@2601:9:7080:13a:a4ac:5a26:6088:8907) has joined #ceph
[8:42] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[8:43] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Read error: Operation timed out)
[8:47] * KindTwo (KindOne@h76.32.28.71.dynamic.ip.windstream.net) has joined #ceph
[8:47] * Sysadmin88 (~IceChat77@94.1.37.151) Quit (Quit: For Sale: Parachute. Only used once, never opened, small stain.)
[8:47] * sarob (~sarob@2601:9:7080:13a:a4ac:5a26:6088:8907) Quit (Ping timeout: 480 seconds)
[8:48] * i_m (~ivan.miro@deibp9eh1--blueice1n2.emea.ibm.com) has joined #ceph
[8:49] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:49] * KindTwo is now known as KindOne
[8:56] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[8:57] * tobru_ (~quassel@2a02:41a:3999::94) Quit (Remote host closed the connection)
[8:58] * tobru (~quassel@2a02:41a:3999::94) has joined #ceph
[8:59] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:02] * dvanders (~dvanders@pb-d-128-141-237-53.cern.ch) has joined #ceph
[9:07] * ksingh (~Adium@2001:708:10:10:b45b:46f3:f6f4:c809) has joined #ceph
[9:11] * sleinen1 (~Adium@2001:620:0:46:e0e8:4110:4a62:efee) has joined #ceph
[9:12] * rendar (~s@host100-181-dynamic.3-87-r.retail.telecomitalia.it) has joined #ceph
[9:13] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[9:18] * sleinen (~Adium@2001:620:0:26:a82d:751b:309d:1903) Quit (Ping timeout: 480 seconds)
[9:20] * xdeller_ (~xdeller@95-31-29-125.broadband.corbina.ru) Quit (Quit: Leaving)
[9:21] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[9:39] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[9:40] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[9:42] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[9:47] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[9:48] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[9:48] * ChanServ sets mode +v andreask
[9:48] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[10:01] * fouxm (~fouxm@185.23.92.11) has joined #ceph
[10:02] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[10:03] * allsystemsarego (~allsystem@5-12-240-115.residential.rdsnet.ro) has joined #ceph
[10:08] * xinxinsh (~xinxinsh@jfdmzpr06-ext.jf.intel.com) Quit (Ping timeout: 480 seconds)
[10:10] * ScOut3R (~ScOut3R@212.96.46.212) has joined #ceph
[10:11] * yanzheng (~zhyan@jfdmzpr03-ext.jf.intel.com) Quit (Remote host closed the connection)
[10:12] * ShaunR (~ShaunR@staff.ndchost.com) Quit (Ping timeout: 480 seconds)
[10:17] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:19] * thomnico (~thomnico@2a01:e35:8b41:120:2959:bfbc:3e8e:4c2e) has joined #ceph
[10:20] * joshd (~joshd@2607:f298:a:607:712e:7b76:e75c:ce84) Quit (Ping timeout: 480 seconds)
[10:24] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[10:29] * joshd (~joshd@2607:f298:a:607:c80c:daea:cd25:cabf) has joined #ceph
[10:39] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[10:40] * mwarwick (~mwarwick@110-174-133-236.static.tpgi.com.au) Quit (Remote host closed the connection)
[10:44] * mozg (~andrei@host81-151-251-29.range81-151.btcentralplus.com) has joined #ceph
[10:44] * dvanders_ (~dvanders@137.138.33.84) has joined #ceph
[10:46] * dvanders (~dvanders@pb-d-128-141-237-53.cern.ch) Quit (Ping timeout: 480 seconds)
[10:46] * dvanders_ is now known as dvanders
[10:49] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[10:52] * LeaChim (~LeaChim@86.162.2.255) has joined #ceph
[10:55] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[10:57] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[10:59] * ksingh (~Adium@2001:708:10:10:b45b:46f3:f6f4:c809) Quit (Quit: Leaving.)
[11:01] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[11:04] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[11:04] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[11:06] * arne_ (~wiebalck@arnemacbook.cern.ch) has joined #ceph
[11:06] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Remote host closed the connection)
[11:06] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[11:07] * arne_ (~wiebalck@arnemacbook.cern.ch) Quit ()
[11:08] * Arne_ (~wiebalck@arnemacbook.cern.ch) has joined #ceph
[11:09] * Arne_ (~wiebalck@arnemacbook.cern.ch) has left #ceph
[11:09] * wiebalck (~wiebalck@arnemacbook.cern.ch) has joined #ceph
[11:13] * rongze (~rongze@117.79.232.204) Quit (Remote host closed the connection)
[11:13] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[11:24] * Kioob`Taff (~plug-oliv@local.plusdinfo.com) has joined #ceph
[11:24] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[11:27] * wiebalck (~wiebalck@arnemacbook.cern.ch) Quit (Quit: wiebalck)
[11:40] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[11:43] * yanzheng (~zhyan@134.134.139.72) has joined #ceph
[11:54] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[11:56] * micha_ (~micha@hyper1.noris.net) has joined #ceph
[11:58] * jnq (~jon@gruidae.jonquinn.com) has joined #ceph
[12:01] * wiebalck (~wiebalck@arnemacbook.cern.ch) has joined #ceph
[12:03] * wiebalck_ (~awiebalck@macafs.cern.ch) has joined #ceph
[12:04] * wiebalck (~wiebalck@arnemacbook.cern.ch) Quit ()
[12:04] * wiebalck_ is now known as wiebalck
[12:07] * thomnico (~thomnico@2a01:e35:8b41:120:2959:bfbc:3e8e:4c2e) Quit (Quit: Ex-Chat)
[12:08] * thomnico (~thomnico@2a01:e35:8b41:120:2959:bfbc:3e8e:4c2e) has joined #ceph
[12:09] * thomnico (~thomnico@2a01:e35:8b41:120:2959:bfbc:3e8e:4c2e) Quit ()
[12:10] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[12:16] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Quit: Siva)
[12:17] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[12:18] * erice_ (~erice@75-166-0-226.hlrn.qwest.net) has joined #ceph
[12:18] * thomnico (~thomnico@2a01:e35:8b41:120:2959:bfbc:3e8e:4c2e) has joined #ceph
[12:20] * mozg (~andrei@host81-151-251-29.range81-151.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[12:21] * erice__ (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[12:23] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[12:24] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[12:25] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[12:26] * erice_ (~erice@75-166-0-226.hlrn.qwest.net) Quit (Ping timeout: 480 seconds)
[12:35] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[12:40] * sarob (~sarob@2601:9:7080:13a:94ce:9274:bf90:d5cf) has joined #ceph
[12:42] * thomnico (~thomnico@2a01:e35:8b41:120:2959:bfbc:3e8e:4c2e) Quit (Quit: Ex-Chat)
[12:44] * thomnico (~thomnico@2a01:e35:8b41:120:c997:4318:36e4:a3c8) has joined #ceph
[12:48] * sarob (~sarob@2601:9:7080:13a:94ce:9274:bf90:d5cf) Quit (Ping timeout: 480 seconds)
[12:48] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Read error: Operation timed out)
[12:50] * thomnico (~thomnico@2a01:e35:8b41:120:c997:4318:36e4:a3c8) Quit (Quit: Ex-Chat)
[12:52] * thomnico (~thomnico@2a01:e35:8b41:120:c997:4318:36e4:a3c8) has joined #ceph
[13:01] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[13:09] * sleinen (~Adium@2001:620:0:46:2cb1:74a9:3b74:c20a) has joined #ceph
[13:16] * sleinen1 (~Adium@2001:620:0:46:e0e8:4110:4a62:efee) Quit (Ping timeout: 480 seconds)
[13:18] * neonDragon (~neonDrago@host86-168-87-229.range86-168.btcentralplus.com) has joined #ceph
[13:22] * Siva_ (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[13:24] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[13:24] * rongze (~rongze@117.79.232.236) has joined #ceph
[13:25] * shang (~ShangWu@175.41.48.77) Quit (Remote host closed the connection)
[13:26] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[13:26] * Siva_ is now known as Siva
[13:27] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[13:33] * thomnico (~thomnico@2a01:e35:8b41:120:c997:4318:36e4:a3c8) Quit (Ping timeout: 480 seconds)
[13:38] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[13:40] * sarob (~sarob@2601:9:7080:13a:85e4:e7dc:f103:1e3b) has joined #ceph
[13:40] * sleinen1 (~Adium@2001:620:0:25:583c:ab85:5ca7:4471) has joined #ceph
[13:45] * sleinen (~Adium@2001:620:0:46:2cb1:74a9:3b74:c20a) Quit (Ping timeout: 480 seconds)
[13:47] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[13:54] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[13:55] * KevinPerks (~Adium@rrcs-67-78-170-22.se.biz.rr.com) has joined #ceph
[14:01] * sleinen (~Adium@130.59.94.252) has joined #ceph
[14:02] * sleinen2 (~Adium@2001:620:0:25:405f:4be5:ca6a:416f) has joined #ceph
[14:03] * xinxinsh (~xinxinsh@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[14:03] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[14:04] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[14:05] * diegows (~diegows@190.190.11.42) has joined #ceph
[14:08] * sleinen1 (~Adium@2001:620:0:25:583c:ab85:5ca7:4471) Quit (Ping timeout: 480 seconds)
[14:09] * sleinen (~Adium@130.59.94.252) Quit (Ping timeout: 480 seconds)
[14:10] * KevinPerks1 (~Adium@97.68.216.74) has joined #ceph
[14:11] * odyssey4me (~odyssey4m@41.75.201.126) has joined #ceph
[14:16] * KevinPerks (~Adium@rrcs-67-78-170-22.se.biz.rr.com) Quit (Ping timeout: 480 seconds)
[14:17] * sarob (~sarob@2601:9:7080:13a:85e4:e7dc:f103:1e3b) Quit (Ping timeout: 480 seconds)
[14:18] <micha_> hi, I have a question about radosgw: I'm trying to set public read access on an object and/or bucket. I tried s3cmd and s3browser to do this, without success.
[14:18] <micha_> is this even possible with radosgw?
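radosgw does implement the S3 canned ACLs, so micha_'s goal should in principle be reachable through s3cmd's setacl subcommand. A sketch, assuming a working .s3cfg already pointed at the gateway, with hypothetical bucket and object names:

```shell
s3cmd setacl --acl-public s3://mybucket/myobject     # public-read on one object
s3cmd setacl --acl-public --recursive s3://mybucket  # whole bucket, recursively
s3cmd info s3://mybucket/myobject                    # verify the resulting ACL
```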
[14:20] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[14:25] * rongze (~rongze@117.79.232.236) Quit (Remote host closed the connection)
[14:25] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) Quit (Read error: Operation timed out)
[14:26] * thomnico (~thomnico@2a01:e35:8b41:120:c997:4318:36e4:a3c8) has joined #ceph
[14:34] * themgt (~themgt@pc-188-95-160-190.cm.vtr.net) has joined #ceph
[14:34] <baffle> I'm trying to wrap my head around cephx keyrings; I find a bazillion conflicting instructions. Is there definitive documentation on which files should be populated with which keys?
[14:34] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[14:36] * xinxinsh (~xinxinsh@jfdmzpr04-ext.jf.intel.com) Quit (Quit: Leaving)
[14:36] * xinxinsh (~xinxinsh@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[14:37] * yanzheng (~zhyan@134.134.139.72) Quit (Ping timeout: 480 seconds)
[14:37] * hybrid512 (~walid@LPoitiers-156-86-25-85.w193-248.abo.wanadoo.fr) has joined #ceph
[14:38] * xinxinsh (~xinxinsh@jfdmzpr04-ext.jf.intel.com) Quit ()
[14:40] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[14:40] * sarob (~sarob@2601:9:7080:13a:609b:d7bf:dcff:3a6b) has joined #ceph
[14:40] <micha_> i think you need at least one admin and one mon key
[14:42] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[14:42] <baffle> micha_: Should they be in separate files like /etc/ceph/ceph.client.admin.keyring ceph.mon.keyring and *also* both in /etc/ceph/keyring ? Or other names? :)
[14:42] <micha_> i tried to do things whithout ceph-deploy, and used this: http://ceph.com/docs/master/dev/mon-bootstrap/ to generate my keys
[14:43] <micha_> i think /etc/ceph/keyring should be sufficient
[14:43] <baffle> micha_: Oh, so just one keyring?
[14:44] <micha_> i think so
[14:44] <baffle> When I do ceph-mon --mkfs, do I use -i (to import keyring) or --keyring (file with multiple keys) or..? :)
[14:46] <micha_> -i is for mon-id, not import
[14:46] <micha_> you need -i mon-id and --keyring (file with multiple keys)
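Putting micha_'s two flags together, a manual monitor bootstrap along the lines of the mon-bootstrap document he links above might look like the following sketch; the mon id "a" and the /tmp paths are illustrative, not prescribed:

```shell
# Generate a mon. key and a client.admin key into one keyring file.
ceph-authtool --create-keyring /tmp/mon.keyring --gen-key -n mon. --cap mon 'allow *'
ceph-authtool /tmp/mon.keyring --gen-key -n client.admin \
    --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow'

# -i names the monitor (the mon id); --keyring seeds it with the keys above.
ceph-mon --mkfs -i a --keyring /tmp/mon.keyring
```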
[14:48] * sarob (~sarob@2601:9:7080:13a:609b:d7bf:dcff:3a6b) Quit (Ping timeout: 480 seconds)
[14:48] <baffle> The OSDs' keys are just in $osd_data/keyring .. Do they need any other keys, or is the one they generate upon mkfs enough?
[14:49] * KevinPerks1 (~Adium@97.68.216.74) Quit (Quit: Leaving.)
[14:50] <micha_> the generated ones should be enough
[14:51] * alaind (~dechorgna@161.105.182.35) has joined #ceph
[14:52] <micha_> i don't really understand this myself, i got it working somehow, but i may have built a security nightmare
[14:52] * yanzheng (~zhyan@134.134.137.75) has joined #ceph
[14:53] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[14:54] <baffle> Yeah; I'd really like a document explaining things like this better. :)
[14:54] <micha_> i used this to configure my osds: http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
[14:54] <baffle> I've also understood that there are some kind of cephx keyserver? But I've not really found much information about it. :)
[14:56] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[14:57] <micha_> problem is that all the documentation has migrated to ceph-deploy; if you want to do things yourself you have to search really hard for information
[15:00] * linuxkidd (~linuxkidd@cpe-066-057-061-231.nc.res.rr.com) has joined #ceph
[15:04] * rongze (~rongze@117.79.232.204) has joined #ceph
[15:04] * alfredodeza (~alfredode@c-98-194-83-79.hsd1.tx.comcast.net) has joined #ceph
[15:06] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[15:07] <baffle> Yeah, I want to actually know what is really happening under the hood. :)
[15:09] * wogri_risc (~wogri_ris@ro.risc.uni-linz.ac.at) Quit (Quit: wogri_risc)
[15:09] * odyssey4me (~odyssey4m@41.75.201.126) Quit (Quit: odyssey4me)
[15:12] * nhm (~nhm@184-97-230-34.mpls.qwest.net) has joined #ceph
[15:12] * ChanServ sets mode +o nhm
[15:14] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[15:15] * glzhao (~glzhao@118.195.65.67) Quit (Quit: leaving)
[15:16] <micha_> i think you are right about this keyserver, "ceph auth list" shows all my keys except the mon key
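The key server baffle asked about is the monitor cluster itself: the `ceph auth` subcommands query and manage it, which fits micha_'s observation that the mon. key (held locally by each monitor rather than registered with the cluster) is absent from the listing. A sketch, where "client.foo" and its caps are hypothetical:

```shell
ceph auth list                 # every key registered with the monitors
ceph auth get client.admin     # fetch a single entry together with its caps
ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=data'
```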
[15:16] * yanzheng (~zhyan@134.134.137.75) Quit (Remote host closed the connection)
[15:16] * markbby (~Adium@168.94.245.3) has joined #ceph
[15:18] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:b866:9cfa:f37d:c363) has joined #ceph
[15:22] * madkiss (~madkiss@2001:6f8:12c3:f00f:9117:e2d:a91f:1f27) Quit (Ping timeout: 480 seconds)
[15:22] * yanzheng (~zhyan@134.134.137.75) has joined #ceph
[15:24] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[15:24] * BillK (~BillK-OFT@58-7-79-238.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:26] * liiwi (liiwi@idle.fi) Quit (Remote host closed the connection)
[15:26] * rongze (~rongze@117.79.232.204) Quit (Remote host closed the connection)
[15:27] * liiwi (~liiwi@idle.fi) has joined #ceph
[15:30] * thomnico (~thomnico@2a01:e35:8b41:120:c997:4318:36e4:a3c8) Quit (Quit: Ex-Chat)
[15:31] * rongze_ (~rongze@117.79.232.204) has joined #ceph
[15:32] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[15:39] * jo0nas (~jonas@188-183-5-254-static.dk.customer.tdc.net) Quit (Quit: Leaving.)
[15:40] * sarob (~sarob@2601:9:7080:13a:6cea:6944:a11d:f979) has joined #ceph
[15:41] * yanzheng (~zhyan@134.134.137.75) Quit (Ping timeout: 480 seconds)
[15:47] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[15:48] * sarob (~sarob@2601:9:7080:13a:6cea:6944:a11d:f979) Quit (Ping timeout: 480 seconds)
[15:48] * andreask (~andreask@138.232.7.65) has joined #ceph
[15:48] * ChanServ sets mode +v andreask
[15:49] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[15:53] * thomnico (~thomnico@2a01:e35:8b41:120:c997:4318:36e4:a3c8) has joined #ceph
[15:54] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[15:55] * sleinen2 (~Adium@2001:620:0:25:405f:4be5:ca6a:416f) Quit (Ping timeout: 480 seconds)
[15:58] * thomnico (~thomnico@2a01:e35:8b41:120:c997:4318:36e4:a3c8) Quit ()
[15:59] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:59] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[16:01] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) has joined #ceph
[16:02] * nwf (~nwf@67.62.51.95) Quit (Read error: Connection reset by peer)
[16:03] * nwf (~nwf@67.62.51.95) has joined #ceph
[16:05] <hughsaunders> Hey, is it possible to rollback a whole rados pool to a snapshot? or create a new pool from an existing pool snapshot?
[16:07] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Ping timeout: 480 seconds)
[16:09] * mattbenjamin (~matt@aa2.linuxbox.com) has joined #ceph
[16:11] <mattbenjamin> pointers to youtube stream?
[16:11] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[16:11] * mattbenjamin (~matt@aa2.linuxbox.com) has left #ceph
[16:18] <linuxkidd> hughsaunders: http://ceph.com/docs/master/rbd/rbd-snapshot/
[16:18] <linuxkidd> Look at the section titled 'Layering'
[16:19] <linuxkidd> wait.. you're talking about the entire pool?
[16:19] <hughsaunders> linuxkidd: yeah, thats RBD snapshots
[16:20] * jskinner (~jskinner@69.170.148.179) has joined #ceph
[16:20] <hughsaunders> I just thought it was weird that the rados command has "mksnap <snap-name>" for snapshotting a whole pool, but the rollback command operates on individual objects "rollback <obj-name> <snap-name>".
[16:21] <linuxkidd> right... I'm looking at the code now (no promises)...
[16:21] * gregmark (~Adium@68.87.42.115) has joined #ceph
[16:21] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[16:24] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[16:25] * andreask (~andreask@138.232.7.65) Quit (Ping timeout: 480 seconds)
[16:26] * sleinen (~Adium@2001:620:0:2d:b4db:56b:38c9:e9b0) has joined #ceph
[16:26] <micha_> hi, I have a question about radosgw: I'm trying to set public read access to an object and/or bucket. I tried s3cmd and s3browser to do this, without success. Is this even possible with radosgw?
[16:31] * linuxkidd_ (~linuxkidd@cpe-066-057-061-231.nc.res.rr.com) has joined #ceph
[16:31] <dmsimard> leseb: Do you have any experience with ceph as object store with Openstack (e.g, horizon) ? Don't see much on your blog or talks about it :D
[16:32] <leseb> dmsimard: do you mean rgw with openstack?
[16:33] <dmsimard> leseb: Yeah
[16:33] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[16:34] <leseb> dmsimard: not that much experience sorry, but the keystone integration is not that good (doesn't support PKI)
[16:34] * sleinen (~Adium@2001:620:0:2d:b4db:56b:38c9:e9b0) Quit (Ping timeout: 480 seconds)
[16:35] <leseb> dmsimard: but regarding openstack we are working on the swift-multi backend, which means that we can have swift talking to RADOS
[16:35] * linuxkidd (~linuxkidd@cpe-066-057-061-231.nc.res.rr.com) Quit (Ping timeout: 480 seconds)
[16:35] <dmsimard> leseb: Instead of going through rgw ?
[16:35] <dmsimard> leseb: That's interesting
[16:35] * markbby (~Adium@168.94.245.3) has joined #ceph
[16:35] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[16:35] <leseb> dmsimard: yes, we just want to use swift since it's well integrated into openstack and probably less confusing for the users
[16:36] * sleinen (~Adium@2001:620:0:25:2823:a5d:dc3e:2b7b) has joined #ceph
[16:36] <dmsimard> leseb: Are there any blueprints?
[16:36] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[16:36] <leseb> dmsimard: well not yet, but we started to work on this, I'll write a bp soon
[16:36] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[16:37] <dmsimard> leseb: Awesome, I'd probably want to contribute in some shape or form :D
[16:37] <leseb> dmsimard: I'll keep you inform then :)
[16:37] <leseb> *informed
[16:37] <dmsimard> Thanks
[16:39] * fouxm_ (~fouxm@185.23.92.11) has joined #ceph
[16:39] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[16:39] * ChanServ sets mode +v andreask
[16:40] * linuxkidd_ is now known as linuxkidd
[16:40] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[16:42] * sagelap (~sage@2600:1012:b005:ce83:6c38:5203:39d7:100f) has joined #ceph
[16:42] <baffle> Soo. A MON instance has a $mon_data/keyring file, with a [mon.] key = key.. And I have a /etc/ceph/keyring with that key + client.admin key. Should I now be able to auth to the mon instance from the same host? "ceph health" gives me "Error EACCES: access denied"
[16:43] <linuxkidd> hughsaunders: Ya, I'm not finding anything to operate on the whole pool at once regarding rolling back a snapshot..
[16:43] * mtanski (~mtanski@cpe-66-68-155-199.austin.res.rr.com) has joined #ceph
[16:43] <linuxkidd> I also don't see anything about an ability to run a copy on write instance for a pool
[16:43] <linuxkidd> hughsaunders: my best recommendation for rolling back a pool would be to do a loop on all objects...
[16:45] <linuxkidd> Rough bash script: for i in $(rados -p $POOLNAME ls); do rados rollback $i $SNAPNAME; done
[16:45] <linuxkidd> where $POOLNAME is the pool name, and $SNAPNAME is the snapshot name..
[16:45] <linuxkidd> I've not tested this.. so, I'd recommend testing on a non-important pool to see how it goes.
[16:46] * fouxm (~fouxm@185.23.92.11) Quit (Ping timeout: 480 seconds)
[16:46] <linuxkidd> rough cut #2: for i in $(rados -p $POOLNAME ls); do rados -p $POOLNAME rollback $i $SNAPNAME; done
[16:47] <linuxkidd> also, you can use 'cppool' to copy an existing pool, say before you roll back all the objects..
[16:48] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:48] <linuxkidd> not necessarily as elegant as a copy-on-write clone.. but.. at least you could have both versions (current / snapshot)
[16:50] * bloodice (blinker@50-89-11-53.res.bhn.net) has joined #ceph
[16:50] <bloodice> oyyy
[16:50] <hughsaunders> linuxkidd: thanks for looking into this for me
[16:50] <hughsaunders> linuxkidd: bash loop over object list only works for objects that haven't been removed since the snapshot was taken..
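[Editor's note: linuxkidd's rough loop above can be combined with hughsaunders' observation by listing objects *as of the snapshot* rather than the current pool contents — the rados CLI's -s/--snap option selects a snap context for reads. This is a hedged sketch only, not tested against a live cluster; $POOLNAME/$SNAPNAME are placeholders, and it should be tried on a throwaway pool first.]

```shell
#!/usr/bin/env bash
# Pool-wide snapshot rollback sketch: visit every object that existed
# at snapshot time (including ones deleted since), rolling each back.
POOLNAME=${1:-testpool}   # placeholder pool name
SNAPNAME=${2:-snap1}      # placeholder snapshot name

if command -v rados >/dev/null 2>&1; then
    # -s selects the snapshot context for the listing, so objects
    # removed after the snapshot still appear and get rolled back.
    for obj in $(rados -p "$POOLNAME" -s "$SNAPNAME" ls); do
        rados -p "$POOLNAME" rollback "$obj" "$SNAPNAME"
    done
    status=done
else
    status=skipped   # no rados CLI on this machine
fi
echo "rollback pass: $status"
```

As linuxkidd notes, `rados cppool` beforehand gives you a fallback copy, since this is not a copy-on-write clone.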
[16:50] <bloodice> when it says "create a rados gateway directory, does it mean create that directory on the machine hosting the deamon?
[16:51] <linuxkidd> micha_: I'm trying to understand your request re: public access...
[16:52] <linuxkidd> micha_: Do you mean, allowing s3 clients to access the radosgw without any client keys?
[16:52] <linuxkidd> micha_: or just allowing all client keys read access?
[16:53] <linuxkidd> bloodice: Can you provide what document you're referencing so I can see the context?
[16:53] <micha_> linuxkidd: without any keys, e.g. from a web browser
[16:54] <bloodice> http://ceph.com/docs/master/radosgw/config/
[16:54] <bloodice> i am trying to setup a rados gateway with an S3 type configuration
[16:54] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[16:55] <bloodice> it also seems to me like they want the gateway installed on a monitor server
[16:58] <linuxkidd> bloodice: radosgw does not need to be installed on a monitor.. I'm looking over the doc now and hope to provide some additional details momentarily
[16:59] <linuxkidd> micha_: Trying to find a work around for you.. I see you emailed and got a response from Yehuda (assuming your last name is Krause), but didn't get resolution
[16:59] <bloodice> k
[16:59] * gkoch (~gkoch@38.86.161.178) has joined #ceph
[17:00] <micha_> linuxkidd: correct, thats me
[17:00] * mrjack_ (mrjack@office.smart-weblications.net) Quit (Ping timeout: 480 seconds)
[17:01] <hughsaunders> bloodice: the cookbooks can help with figuring out what needs to be done for an install. For example, the radosgw cookbook does create the directory you mentioned on the radosgw box. https://github.com/ceph/ceph-cookbooks/blob/master/recipes/radosgw.rb#L60
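[Editor's note: the "create a data directory" step bloodice asks about means a directory on the host running the radosgw daemon (not a monitor), mirroring what the linked cookbook recipe does. A minimal sketch, assuming the usual /var/lib/ceph layout and the doc's example instance name "gateway"; $ROOT is only a placeholder so the sketch can run outside a real gateway host.]

```shell
#!/usr/bin/env bash
# Create the radosgw data directory on the gateway host itself.
ROOT=${CEPH_ROOT:-/tmp/ceph-demo}   # set to "" (filesystem root) on a real host
mkdir -p "$ROOT/var/lib/ceph/radosgw/ceph-radosgw.gateway"
echo "created $ROOT/var/lib/ceph/radosgw/ceph-radosgw.gateway"
```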
[17:02] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:02] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[17:02] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[17:03] <bloodice> good point
[17:05] * mnash (~chatzilla@vpn.expressionanalysis.com) Quit (Ping timeout: 480 seconds)
[17:06] * simulx (~simulx@vpn.expressionanalysis.com) Quit (Ping timeout: 480 seconds)
[17:08] <linuxkidd> micha_: I'm not finding a whole lot on this... My recommendation would be to reply to Yehuda and ask if he has a recommended way around this issue so that you can move forward. He's offline right now, or I'd hit him up in chat.
[17:08] <linuxkidd> micha_: wish I could be more help...
[17:10] * simulx (~simulx@66-194-114-178.static.twtelecom.net) has joined #ceph
[17:11] * fouxm_ (~fouxm@185.23.92.11) Quit (Remote host closed the connection)
[17:12] * fouxm (~fouxm@185.23.92.11) has joined #ceph
[17:12] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[17:13] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) has joined #ceph
[17:18] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) has joined #ceph
[17:21] * sagelap (~sage@2600:1012:b005:ce83:6c38:5203:39d7:100f) Quit (Read error: Connection reset by peer)
[17:23] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) has joined #ceph
[17:23] * DarkAce-Z (~BillyMays@50.107.53.200) has joined #ceph
[17:25] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[17:27] * DarkAceZ (~BillyMays@50.107.53.200) Quit (Ping timeout: 480 seconds)
[17:28] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[17:31] * davidzlap (~Adium@cpe-23-242-31-175.socal.res.rr.com) has joined #ceph
[17:33] * mnash (~chatzilla@66-194-114-178.static.twtelecom.net) has joined #ceph
[17:33] <micha_> linuxkidd: i replied to him on friday, but was impatient, and hoped someone in here could help me.
[17:34] <micha_> linuxkidd: thank you for your time
[17:36] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[17:37] <linuxkidd> micha_: no worries... I've also reached out to another developer for his input. We'll see how it goes.
[17:37] * DarkAce-Z is now known as DarkAceZ
[17:37] * markbby (~Adium@168.94.245.3) has joined #ceph
[17:40] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[17:41] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[17:48] * clayb (~kvirc@69.191.241.59) has joined #ceph
[17:49] * nwat (~textual@eduroam-255-104.ucsc.edu) has joined #ceph
[17:50] <clayb> Anyone have a reason not to add Ceph's GUID's to http://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs (from https://github.com/ceph/ceph/blob/master/src/ceph-disk#L61-L66)?
[17:51] <gkoch> I'm having trouble starting radosgw. The logs say "ERROR: FCGX_Accept_r returned -4" and then the service shuts down. Anyone have any insight to this error?
[17:55] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[18:00] * micha_ (~micha@hyper1.noris.net) Quit (Quit: leaving)
[18:00] * mattt_ (~textual@94.236.7.190) Quit (Quit: Computer has gone to sleep.)
[18:02] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:02] * xarses (~andreww@64-79-127-122.static.wiline.com) has joined #ceph
[18:03] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[18:05] * markbby (~Adium@168.94.245.3) has joined #ceph
[18:06] * angdraug (~angdraug@64-79-127-122.static.wiline.com) has joined #ceph
[18:06] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[18:07] * ScOut3R (~ScOut3R@212.96.46.212) Quit (Ping timeout: 480 seconds)
[18:07] * bandrus (~Adium@107.216.174.246) has joined #ceph
[18:09] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Read error: Operation timed out)
[18:10] * sleinen (~Adium@2001:620:0:25:2823:a5d:dc3e:2b7b) Quit (Quit: Leaving.)
[18:10] * sleinen (~Adium@130.59.94.252) has joined #ceph
[18:14] * aliguori (~anthony@74.202.210.82) has joined #ceph
[18:14] <bloodice> i am trying to follow those instructions still, and now its saying i should have a ceph.keyring file in my directory with all my other keyrings... those are there, but not the ceph.keyring
[18:14] * mkoderer (uid11949@id-11949.ealing.irccloud.com) has joined #ceph
[18:14] <bloodice> i am using cephx and the whole cluster is communicating fine, health is ok
[18:14] * andrewbogott (~andrewbog@50-93-251-174.fttp.usinternet.com) has joined #ceph
[18:15] * andrewbogott (~andrewbog@50-93-251-174.fttp.usinternet.com) Quit ()
[18:16] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:18] * sleinen (~Adium@130.59.94.252) Quit (Ping timeout: 480 seconds)
[18:19] * xmltok (~xmltok@216.103.134.250) has joined #ceph
[18:20] * ircolle (~Adium@2601:1:8380:2d9:59fb:3844:54cd:7e29) has joined #ceph
[18:20] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[18:21] <bloodice> oh nevermind, the keyring is in the deamon directories
[18:26] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[18:27] * ShaunR- (~ShaunR@staff.ndchost.com) has joined #ceph
[18:27] * ShaunR- (~ShaunR@staff.ndchost.com) has left #ceph
[18:27] * ShaunR (~ShaunR@staff.ndchost.com) Quit ()
[18:29] * ShaunR (~ShaunR@staff.ndchost.com) has joined #ceph
[18:32] * ircolle (~Adium@2601:1:8380:2d9:59fb:3844:54cd:7e29) Quit (Quit: Leaving.)
[18:32] <bloodice> ok.. like every client has the ceph.keyring... which client am i adding it to... for the rados...
[18:33] * ircolle (~Adium@2601:1:8380:2d9:85cf:ceb8:ba7:36a2) has joined #ceph
[18:34] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) has left #ceph
[18:34] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) has joined #ceph
[18:40] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[18:42] * ScOut3R (~scout3r@dsl51B69BF7.pool.t-online.hu) has joined #ceph
[18:44] * mschiff (~mschiff@85.182.236.82) has joined #ceph
[18:44] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[18:47] * BillK (~BillK-OFT@58-7-79-238.dyn.iinet.net.au) has joined #ceph
[18:48] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:50] * ponyofdeath (~vladi@cpe-75-80-165-117.san.res.rr.com) has joined #ceph
[18:51] * xdeller (~xdeller@91.218.144.129) Quit (Quit: Leaving)
[18:51] <gkoch> I am also getting this error when starting ceph-radosgw "libcurl doesn't support curl_multi_wait()". I installed curl 7.29 from the ceph repos.
[18:51] <gkoch> Any idea how to resolve that dep?
[18:52] * i_m (~ivan.miro@deibp9eh1--blueice1n2.emea.ibm.com) Quit (Ping timeout: 480 seconds)
[18:53] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[18:54] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[18:54] * Siva (~sivat@vpnnat.eglbp.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[18:54] * nwat (~textual@eduroam-255-104.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[18:55] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[18:58] <mikedawson> sagewk: welcome back! Graphing rbd perf dumps this am. I understand aio_r and aio_w. Could you help me understand what aio_flush and aio_wr represent?
[18:59] <mikedawson> actually, just aio_flush (as it turns out aio_w doesn't exist and is actually aio_wr)
[19:00] * nwat (~textual@eduroam-255-104.ucsc.edu) has joined #ceph
[19:00] * gregsfortytwo (~Adium@2607:f298:a:607:cd79:6581:302c:5b69) Quit (Quit: Leaving.)
[19:00] * gregsfortytwo (~Adium@2607:f298:a:607:2185:9401:e460:77ba) has joined #ceph
[19:01] * mschiff (~mschiff@85.182.236.82) Quit (Read error: Connection reset by peer)
[19:02] * mschiff_ (~mschiff@85.182.236.82) has joined #ceph
[19:04] * ScOut3R (~scout3r@dsl51B69BF7.pool.t-online.hu) Quit ()
[19:04] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[19:06] * mschiff_ (~mschiff@85.182.236.82) Quit (Remote host closed the connection)
[19:06] * mschiff_ (~mschiff@85.182.236.82) has joined #ceph
[19:07] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[19:10] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[19:11] * symmcom (~symmcom@184.70.203.22) has joined #ceph
[19:12] <symmcom> Hello Ceph community! Could somebody tell me how I can tweak a Ceph cluster, such as increasing op threads, read sizes, etc.?
[19:13] <sagewk> mikedawson: aio_flush is just the number of flushes requested.. these come from the fs doing its barrier/flush dance during commits/fsync
[19:13] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[19:13] * markbby (~Adium@168.94.245.3) has joined #ceph
[19:16] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[19:17] * gregmark (~Adium@68.87.42.115) Quit (Ping timeout: 480 seconds)
[19:19] * fouxm (~fouxm@185.23.92.11) Quit (Remote host closed the connection)
[19:20] * elder_ (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[19:20] * KevinPerks (~Adium@97.68.216.74) has joined #ceph
[19:20] <mikedawson> sagewk: http://www.gammacode.com/rbd-aio.jpg that shows 10s samples from the admin socket. The oddity at 13:00-13:03 was a reboot. But why do reads, writes, and flushes periodically spike?
[19:24] <bloodice> when using ceph-deploy where does it store the ceph.keyring?
[19:25] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[19:28] * Shmouel (~Sam@fny94-12-83-157-27-95.fbx.proxad.net) Quit (Read error: Connection reset by peer)
[19:30] <mikedawson> bloodice: I don't believe ceph-deploy makes a ceph.keyring file. All keyrings on my ceph-deploy machine are in /etc/ceph/. Perhaps you need ceph.client.admin.keyring
[19:30] * xmltok (~xmltok@216.103.134.250) Quit (Ping timeout: 480 seconds)
[19:31] * KevinPerks (~Adium@97.68.216.74) Quit (Ping timeout: 480 seconds)
[19:33] <bloodice> oh i should be specifying that?
[19:33] <bloodice> omg seriously
[19:35] * nwat (~textual@eduroam-255-104.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[19:35] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[19:38] <bloodice> let me guess, i need to copy the rados keyring to the admin server then add the keyring
[19:38] <bloodice> add it to the admin keyring
[19:38] * nwat (~textual@eduroam-255-104.ucsc.edu) has joined #ceph
[19:40] <bloodice> bam
[19:40] <bloodice> wow you are the man
[19:40] * rongze_ (~rongze@117.79.232.204) Quit (Remote host closed the connection)
[19:40] <bloodice> thanks, i have been head banging for two hours on this
[19:48] * BillK (~BillK-OFT@58-7-79-238.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[19:50] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Read error: Operation timed out)
[19:53] * yanzheng (~zhyan@jfdmzpr06-ext.jf.intel.com) has joined #ceph
[19:55] <bloodice> well its starting the rados now, but the log says this then dies: error storing zone params
[19:55] <bloodice> whole line: error storing zone params: (1) Operation not permitted
[19:56] * nwat (~textual@eduroam-255-104.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[19:57] * yanzheng (~zhyan@jfdmzpr06-ext.jf.intel.com) Quit (Remote host closed the connection)
[19:58] * sarob_ (~sarob@2601:9:7080:13a:8cb:f148:d7e6:8277) has joined #ceph
[20:01] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[20:01] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[20:01] * nwat (~textual@eduroam-255-104.ucsc.edu) has joined #ceph
[20:03] * cmdrk (~lincoln@c-24-12-206-91.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[20:06] * danieagle (~Daniel@179.176.54.5.dynamic.adsl.gvt.net.br) has joined #ceph
[20:06] * nwat (~textual@eduroam-255-104.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[20:08] * mtanski (~mtanski@cpe-66-68-155-199.austin.res.rr.com) Quit (Quit: mtanski)
[20:09] * mtanski (~mtanski@cpe-66-68-155-199.austin.res.rr.com) has joined #ceph
[20:11] * rongze (~rongze@117.79.232.204) has joined #ceph
[20:11] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[20:13] * sleinen (~Adium@2001:620:0:25:9c16:c303:d60b:72ae) has joined #ceph
[20:14] * mtanski (~mtanski@cpe-66-68-155-199.austin.res.rr.com) Quit (Quit: mtanski)
[20:19] * mtanski (~mtanski@cpe-66-68-155-199.austin.res.rr.com) has joined #ceph
[20:19] <via> if i have a two node cluster, with a rep size of 2, my crush rule "step chooseleaf firstn 0 type host" works to get one osd from each host
[20:19] <via> but if i have two nodes and want a rep size of 2, where it chooses one osd on each host then a third one randomly, is there a rule to do that?
[20:20] <sagewk> via: not really (and that would be a rep size of 3)
[20:20] <via> yeah, sorry, i typod
[20:20] <via> so i can't use too chooseleaf lines or something?
[20:20] <via> two*
[20:21] * rongze (~rongze@117.79.232.204) Quit (Ping timeout: 480 seconds)
[20:21] <via> any way to write a rule to guarantee that all three aren't on the same machine
[20:21] <sagewk> hmm.. may work actually. take root, chooseleaf -1 ..., emit, take root, chooseleaf 1 ..., emit
[20:22] <sagewk> actually, just do choose 2 hosts, choose 2 osds in each host, and set nrep=3
[20:22] <via> oh
[20:23] <via> okay, sorry my crush knowledge is rather limited, would that be two total step choose firstn lines?
[20:24] * haomaiwa_ (~haomaiwan@117.79.232.229) has joined #ceph
[20:25] * mtanski (~mtanski@cpe-66-68-155-199.austin.res.rr.com) Quit (Quit: mtanski)
[20:25] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[20:25] * dlan (~dennis@116.228.88.131) Quit (Read error: Operation timed out)
[20:26] <via> step take default, step chooseleaf firstn 2 type host ?
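[Editor's note: sagewk's second suggestion — choose 2 hosts, choose 2 OSDs in each, set pool size (nrep) to 3 so only the first three candidates are used — could look roughly like this in a decompiled CRUSH map. The rule name and ruleset number are placeholders; test with `crushtool --test` before injecting it into a live cluster.]

```
rule two_hosts_three_copies {
    ruleset 1
    type replicated
    min_size 2
    max_size 3
    step take default
    step choose firstn 2 type host     # pick two distinct hosts
    step choose firstn 2 type osd      # two OSDs within each host
    step emit
}
```

With pool size 3, CRUSH returns the first 3 of the (up to) 4 OSDs selected, so no host ever holds all three copies.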
[20:26] * haomaiwang (~haomaiwan@211.155.113.217) Quit (Read error: Operation timed out)
[20:26] * Pedras (~Adium@216.207.42.132) has joined #ceph
[20:26] * Pedras1 (~Adium@216.207.42.132) has joined #ceph
[20:26] * Pedras (~Adium@216.207.42.132) Quit (Read error: Connection reset by peer)
[20:26] * ircolle (~Adium@2601:1:8380:2d9:85cf:ceb8:ba7:36a2) Quit (Read error: Connection reset by peer)
[20:28] * ircolle (~Adium@2601:1:8380:2d9:85cf:ceb8:ba7:36a2) has joined #ceph
[20:28] <bloodice> does anyone know how to setup the rados gateway? mine wont start and stay running after install
[20:28] * dlan (~dennis@116.228.88.131) has joined #ceph
[20:29] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[20:30] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) Quit (Read error: Connection reset by peer)
[20:30] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) has joined #ceph
[20:31] * sarob__ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[20:32] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) Quit ()
[20:32] * nwat (~textual@eduroam-255-104.ucsc.edu) has joined #ceph
[20:32] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) has joined #ceph
[20:33] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) Quit ()
[20:33] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) has joined #ceph
[20:35] * madkiss (~madkiss@2001:6f8:12c3:f00f:9461:c7d4:cf12:bdb5) has joined #ceph
[20:35] * dmick (~dmick@2607:f298:a:607:d171:6ed0:c7b:c593) has joined #ceph
[20:36] * sarob_ (~sarob@2601:9:7080:13a:8cb:f148:d7e6:8277) Quit (Ping timeout: 480 seconds)
[20:37] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[20:39] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:b866:9cfa:f37d:c363) Quit (Ping timeout: 480 seconds)
[20:41] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[20:44] * rongze (~rongze@117.79.232.204) has joined #ceph
[20:47] * ScOut3R (~scout3r@dsl51B69BF7.pool.t-online.hu) has joined #ceph
[20:49] * xdeller (~xdeller@95-31-29-125.broadband.corbina.ru) has joined #ceph
[20:51] <via> sagewk: is there any risk to the data i'm storing if i try changing the crush rules?
[20:52] <via> like, if its invalid, can i just change it back and not have lost anything?
[20:52] <gkoch> Hello, anyone here have insight in how to resolve this warning when starting radosgw: "WARNING: libcurl doesn't support curl_multi_wait()" using CURL 7.29 from ceph repo.
[20:53] <gkoch> bloodice: sounds like we have a similar issue. I have yet to get radosgw to start and stay up.
[20:53] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Read error: Operation timed out)
[20:53] * rongze (~rongze@117.79.232.204) Quit (Ping timeout: 480 seconds)
[20:56] <bloodice> lol yea
[20:58] <gkoch> bloodice: I'm using RHEL6, I downloaded the curl libraries, apache, and fastcgi from the ceph repo. When you start radosgw does it complain about curl_multi_wait?
[20:59] <bloodice> yes
[20:59] <bloodice> it does
[21:00] <gkoch> bloodice: what OS are you using?
[21:00] <bloodice> ubuntu
[21:01] <gkoch> What version of ceph and curl? I'm running dumpling 0.67.4 and curl 7.29. Not sure I can help as I have the same problem, but maybe it's something we can figure out together or someone has a tip.
[21:01] <bloodice> emp
[21:03] * xinxinsh (~xinxinsh@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[21:07] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[21:09] * xinxinsh (~xinxinsh@jfdmzpr04-ext.jf.intel.com) Quit (Quit: Leaving)
[21:10] * xinxinsh (~xinxinsh@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[21:10] <ponyofdeath> hi, for some reason i am in HEALTH_WARN after setting up a new cluster across two physical servers with 8 OSDs total, 4 each
[21:15] * al-maisan (~al-maisan@94.236.7.190) has joined #ceph
[21:18] <pmatulis> ponyofdeath: what's the warning?
[21:20] <mikedawson> ponyofdeath: paste the output of 'ceph health detail'
[21:20] <mikedawson> and give us a link
[21:22] * sagelap (~sage@2607:f298:a:607:ea03:9aff:febc:4c23) Quit (Quit: Leaving.)
[21:24] * saturnine (~saturnine@ashvm.saturne.in) has joined #ceph
[21:24] * dmick (~dmick@2607:f298:a:607:d171:6ed0:c7b:c593) Quit (Quit: Leaving.)
[21:24] * sagewk (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) Quit (Quit: Leaving.)
[21:25] * sjustwork (~sam@2607:f298:a:607:d6be:d9ff:fe8e:1a8e) Quit (Quit: Leaving.)
[21:25] * Tamil (~tamil@38.122.20.226) Quit (Ping timeout: 480 seconds)
[21:25] <pmatulis> if a new client comes online how is it determined which monitor it will communicate with to get the cluster map? and secondly, is there any other client:monitor activity going on?
[21:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[21:26] * L2SHO (~L2SHO@office-nat.choopa.net) has joined #ceph
[21:27] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) has joined #ceph
[21:27] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[21:27] * ChanServ sets mode +v andreask
[21:28] <L2SHO> How do I reweight an OSD to a specific weight? "ceph osd tree" shows a weight of 1.82, but "ceph osd reweight" requires a weight between 0 and 1.
[21:29] <pmatulis> i didn't think OSDs could have a weight > 1
[21:30] <Gugge-47527> change the crush weight
[21:30] * scuttlemonkey changes topic to 'For CDS join #ceph-summit || CDS Firefly Schedule available! http://goo.gl/LOhq3O || Latest stable (v0.72.0 "Emperor") -- http://ceph.com/get || dev channel #ceph-devel '
[21:31] * joshd (~joshd@2607:f298:a:607:c80c:daea:cd25:cabf) Quit (Ping timeout: 480 seconds)
[21:32] <L2SHO> Gugge-47527, so "osd reweight" and "osd crush reweight" are different things?
[21:32] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[21:33] <L2SHO> Gugge-47527, and the output of "ceph osd tree" corresponds to the same number I would use in "ceph osd crush reweight <name> <weight>" ?
[21:33] <scuttlemonkey> under 30m until first session of CDS
[21:33] * sagelap (~sage@172.56.39.54) has joined #ceph
[21:33] <scuttlemonkey> /join #ceph-summit to participate
[21:36] * eternaleye (~eternaley@c-24-17-202-252.hsd1.wa.comcast.net) has joined #ceph
[21:37] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[21:46] <Gugge-47527> L2SHO: the output of ceph osd tree has a weight and a reweight column
[21:46] <Gugge-47527> L2SHO: one is changed with crush reweight, and one is changed with reweight
[21:48] <L2SHO> Gugge-47527, ok, that's kind of confusing. I ended up changing the number in the weight column with "crush reweight" but I still see all 1's in the weight column
[21:49] <L2SHO> are the weights for different aspects of the filesystem? I'm trying to adjust the PG placement because I have a full OSD
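[Editor's note: the two knobs Gugge-47527 distinguishes, as commands. `ceph osd crush reweight` changes the CRUSH weight (the WEIGHT column of `ceph osd tree`, conventionally the device's capacity in TiB, e.g. 1.82), while `ceph osd reweight` sets the 0..1 override (the REWEIGHT column) used to drain data off an overfull OSD. The osd id and values below are examples only; sketch is guarded so it is a no-op without a cluster.]

```shell
#!/usr/bin/env bash
# Illustrative only: adjust both weight columns for a hypothetical osd.3.
if command -v ceph >/dev/null 2>&1; then
    ceph osd crush reweight osd.3 1.82   # capacity-style CRUSH weight
    ceph osd reweight 3 0.8              # 0..1 override to shed PGs from a full OSD
    status=done
else
    status=skipped   # no ceph CLI on this machine
fi
echo "$status"
```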
[21:51] * lofejndif (~lsqavnbok@tor-exit01.solidonetworks.com) has joined #ceph
[21:51] * rturk-away is now known as rturk
[21:51] <saturnine> Wondering if anyone can help provide some insight for a potential deployment.
[21:54] <saturnine> I'm looking into utilizing ceph rbd for VM storage as an alternative to ZFS backed storage.
[21:55] <saturnine> The biggest advantage we've discussed regarding ZFS is protection against bit rot.
[21:55] <mikedawson> saturnine: that will work
[21:56] <saturnine> How does Ceph compare to a ZFS solution in terms of protecting against bitrot.
[21:56] <saturnine> Or is there a viable solution to running Ceph on ZFS as opposed to XFS?
[21:56] <linuxkidd> not sure on ZFS, but Ceph performs regular data scrubs to find disparity in written data between copies of the same shard on different OSDs
[21:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[21:57] * rturk is now known as rturk-away
[21:57] * rturk-away is now known as rturk
[21:57] <saturnine> I don't see the protection as being a major issue, just trying to dispel some concerns.
[21:57] <saturnine> linuxkidd: If it does find a dispairity, how does it rectify it?
[21:57] <linuxkidd> XFS is the current recommended Ceph underlying filesystem. BTRFS will be the recommended underlying FS once (long-term) stability is achieved
[21:57] <dmsimard> linuxkidd: Or ZFS :D
[21:57] <saturnine> Does it always base it on the primary object?
[21:58] <linuxkidd> It relies on human intervention to advise which shard is correct.
[21:58] <linuxkidd> then it re-replicates the human recommended shard to the other copies
[22:00] <linuxkidd> dmsimard: I've not heard anything about ZFS under Ceph... not saying it's not being looked at / worked on.. I've just not come across it personally yet
[22:01] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[22:01] * ChanServ sets mode +o elder
[22:01] * sarob__ (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[22:01] * Sodo (~Sodo@a88-113-108-239.elisa-laajakaista.fi) Quit (Ping timeout: 480 seconds)
[22:02] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[22:02] <saturnine> linuxkidd: How does the notification on that work?
[22:02] * alram (~alram@38.122.20.226) has joined #ceph
[22:02] <saturnine> So if it does detect an issue, how does it indicate that it has, besides just listing the cluster as unhealthy?
[22:03] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) Quit (Remote host closed the connection)
[22:04] <linuxkidd> ceph -s or ceph health will report inconsistent pgs
[22:04] <linuxkidd> Also, ceph -w ( or the main ceph log ) will show details on why it was marked inconsistent
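[Editor's note: a short sketch of how scrub-detected inconsistencies surface and the blunt built-in fix. Caveat, per linuxkidd's point about human review: `ceph pg repair` in this era simply re-replicates the primary's copy — it does not know which replica is actually good, so inspect before repairing. The pgid is a made-up example; the block is guarded so it runs as a no-op without a cluster.]

```shell
#!/usr/bin/env bash
# Surface inconsistent PGs found by scrubbing, then (manually) repair.
if command -v ceph >/dev/null 2>&1; then
    ceph health detail | grep -i inconsistent || true
    # After confirming the primary's copy is the good one:
    # ceph pg repair 2.37    # substitute the reported pgid
    status=done
else
    status=skipped   # no ceph CLI on this machine
fi
echo "$status"
```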
[22:05] * erice (~erice@c-98-245-48-79.hsd1.co.comcast.net) Quit (Remote host closed the connection)
[22:05] <dmsimard> linuxkidd: Pretty sure ZFS is on the roadmap - motives being that it's production ready and supports parallel writing to fs and journal
[22:06] <dmsimard> linuxkidd: Firefly maybe ? Not sure where I read that
[22:06] <linuxkidd> dmsimard: sounds great... :)
[22:07] * erice__ is now known as erice
[22:08] <ircolle> liuxkidd - preliminary zfs back end support came out in Emperor
[22:10] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[22:10] <linuxkidd> Technically, it's been there since 0.67, but no special features of ZFS were implemented at that point..
[22:10] <linuxkidd> I'm not sure if Emperor added additional capabilities w/ ZFS, but there's nothing in the release notes stating so
[22:11] <linuxkidd> sry, that's a change since Dumpling... I mis-read the changelog
[22:12] <linuxkidd> so, I believe the current status is as stated in http://wiki.ceph.com/01Planning/02Blueprints/Emperor/osd%3A_ceph_on_zfs
[22:12] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[22:12] <linuxkidd> Supported just as XFS / ext4 are, but no special capabilities are enabled yet ( like btrfs is )
[22:14] * Tamil (~tamil@38.122.20.226) has joined #ceph
[22:14] <ponyofdeath> pmatulis, mikedawson http://paste.ubuntu.com/6475857
[22:15] <ponyofdeath> pmatulis, mikedawson http://paste.ubuntu.com/6475860
[22:15] <pmatulis> ponyofdeath: mystery solved
[22:15] <erice> You have to build ceph from source to get ZFS support. I have it running on a pair of 30 drive OSD nodes
[22:17] <ponyofdeath> pmatulis: how is the disk space low when I just prepared 9 new drives?
[22:17] <pmatulis> ponyofdeath: the monitor
[22:17] <ponyofdeath> ahh ok
[22:17] <ponyofdeath> i need more root disk space
[22:18] * sjustlaptop (~sam@2607:f298:a:697:d4dd:c449:4fd6:2f8e) has joined #ceph
[22:19] <mikedawson> ponyofdeath: you'll want a minimum of 3 monitors for production use. You also don't have all your OSDs up and in (osdmap e28: 8 osds: 3 up, 3 in)
[22:19] <ponyofdeath> so the mon needs quite a bit of disk space
[22:19] <ponyofdeath> mikedawson: i did active on all the disks
[22:19] <ponyofdeath> not sure how i can bring them in, maybe it's the low disk space issue
[22:19] <pmatulis> ponyofdeath: how much disk space is it presently using?
[22:19] <ponyofdeath> /dev/sda2 5.4G 4.0G 1.1G 79% /
[22:20] <pmatulis> ponyofdeath: k, you were being conservative there :)
[22:20] <ponyofdeath> yesh
[22:20] <ponyofdeath> where is the dir that needs the space
[22:21] <ponyofdeath> should i just sym link /var/lib/ceph to more space
[22:21] <mikedawson> ponyofdeath: with root partitions that small, you'll also need to be wary of turning up the debugging levels on your ceph daemons. They can get quite verbose.
[22:21] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Read error: Operation timed out)
[22:21] <ponyofdeath> since i am using that ssd drive as the os drive / journal drive
[22:23] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[22:24] <pmatulis> ponyofdeath: 'sudo du -sh /*'
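[Editor's note: a harmless demo of pmatulis' suggestion, run against a throwaway directory instead of `/` so it works anywhere without root. The dummy "mon store" and log files are illustrative; on the real node you would run `sudo du -sh /* | sort -rh`.]

```shell
# Build a throwaway tree mimicking the usual space consumers on a mon host.
demo=$(mktemp -d)
mkdir -p "$demo/lib/ceph/mon" "$demo/log/ceph"
head -c 1048576 /dev/zero > "$demo/lib/ceph/mon/store.db"  # 1 MiB dummy mon store
head -c 4096    /dev/zero > "$demo/log/ceph/ceph.log"      # small dummy log

# Largest usage first, sizes in KiB (the real command: sudo du -sh /* | sort -rh)
usage=$(du -sk "$demo"/* | sort -rn)
echo "$usage"
rm -rf "$demo"
```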
[22:25] <pmatulis> unanswered question -- if a new client comes online how is it determined which monitor it will communicate with to get the cluster map? and secondly, is there other client:monitor activity going on (beyond getting the cluster map)?
[22:26] * sjustlaptop (~sam@2607:f298:a:697:d4dd:c449:4fd6:2f8e) Quit (Ping timeout: 480 seconds)
[22:27] * sagewk (~sage@38.122.20.226) has joined #ceph
[22:28] * al-maisan (~al-maisan@94.236.7.190) Quit (Ping timeout: 480 seconds)
[22:29] <loicd> Mike Bryant or Li Wang?
[22:34] * yanzheng (~zhyan@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[22:34] * kraken (~kraken@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[22:34] * kraken (~kraken@gw.sepia.ceph.com) has joined #ceph
[22:36] * stj (~s@tully.csail.mit.edu) has joined #ceph
[22:37] <stj> hi all, I'm setting up a test ceph cluster, and I'm having some trouble adding osd's
[22:37] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[22:38] <stj> my first 4 nodes went up without trouble, but when I try to add a 5th, the 'sudo ceph-disk-activate' script hangs for 5 minutes and gives up
[22:38] <scuttlemonkey> stj: just fyi, the ceph developer summit is going on right now so answers may be slow in coming until later this afternoon
[22:38] <stj> ok, thanks for the info scuttlemonkey
[22:38] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) Quit (Quit: Leaving.)
[22:38] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) has joined #ceph
[22:39] * xcrracer (~xcrracer@fw-ext-v-1.kvcc.edu) Quit (Quit: quit)
[22:40] <stj> anywho, question is, does anyone know where I might look for more info? Nothing is being logged, as far as I can tell
[22:40] <erice> stj: start with /var/log/ceph on the client in case it got far enough along to log
[22:40] <scuttlemonkey> stj: things I would look at are 1) can you ping that machine in question from the mon(s) 2) is it resolvable via the shortname stored in the crush map
[22:41] <scuttlemonkey> then yeah, look at logs
[22:41] <scuttlemonkey> can also turn up logging with injectargs
[22:41] <stj> erice: yeah, nothing on the client so far
[22:41] <stj> scuttlemonkey: how do I tell which shortname is stored in the crushmap?
[22:41] <scuttlemonkey> stj: http://ceph.com/docs/master/rados/operations/crush-map/
[22:43] <stj> scuttlemonkey: 1) is definitely true, my hunch is that 2) is also true
[22:43] * mmgaggle (~kyle@cerebrum.dreamservers.com) has joined #ceph
[22:43] <scuttlemonkey> ok
[22:43] <stj> I see that ceph-deploy logs into the client, and it starts processes
[22:43] <stj> but the processes just seem to hang on the client
[22:44] <scuttlemonkey> can also look at http://ceph.com/docs/master/rados/troubleshooting/log-and-debug/
[22:44] <scuttlemonkey> turn up logging and watch
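[Editor's note: a sketch of the persistent variant of scuttlemonkey's advice; debug values are illustrative (20 is the most verbose level). The runtime alternative, without a restart, is `ceph tell osd.* injectargs '--debug-osd 20 --debug-ms 1'`.]

```ini
; ceph.conf sketch: raise OSD debug levels, then restart the daemon.
; Beware the earlier warning about small root partitions: these logs get big.
[osd]
    debug osd = 20
    debug ms = 1
```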
[22:44] <erice> stj: anything in dmesg on a hardware error?
[22:44] * sjustwork (~sam@2607:f298:a:607:38aa:d318:6f02:da9b) has joined #ceph
[22:45] <stj> not that I can see... I had freshly formatted the disks, and the last messages were from successfully mounting those
[22:45] * tarfik (~tarfik@aghy169.neoplus.adsl.tpnet.pl) has joined #ceph
[22:46] <stj> don't see any ominous looking errors earlier in the log either
[22:46] * rongze (~rongze@117.79.232.204) has joined #ceph
[22:46] <stj> what's strange, is that I can add new OSDs on other existing nodes in the cluster
[22:46] <stj> this script is only hanging on this new node
[22:47] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[22:47] <stj> which is identical hardware/disks and OS/kernel version
[22:48] <bloodice> gkoch: i have someone looking into the error, if it gets fixed, i will pass on the info to ya :)
[22:49] <erice> stj: I use "ceph osd tree" to see if it got partially added
[22:51] <stj> hmm, nothing partially added :/
[22:53] <stj> turned up logging on osds, but I still don't see anything getting logged after starting the ceph-deploy osd activate
[22:53] <stj> (the ceph-deploy osd prepare step works fine)
[22:54] <stj> the hanging client currently shows a python and /bin/sh process, owned by root, running the ceph-disk-activate script
[22:54] <stj> as well as a 'osd create' process
[22:54] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[22:55] * fireD (~fireD@93-139-165-73.adsl.net.t-com.hr) has joined #ceph
[22:56] <tarfik> Hi, I have 2 s3/ceph clusters, active -> passive ( copy of objects ), with a large bucket (17M objects). On one cluster i have sharded this bucket into 256 parts, and after this the avg GET time doubled :-(. Do you have any idea why?
[22:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[22:56] * joshd (~joshd@2607:f298:a:607:354c:fce3:e8b:463) has joined #ceph
[22:57] * dmick (~dmick@38.122.20.226) has joined #ceph
[22:57] * rongze (~rongze@117.79.232.204) Quit (Ping timeout: 480 seconds)
[22:57] <erice> stj: Is this the first OSD on this node?
[22:58] * rturk is now known as rturk-away
[22:58] <stj> erice: yes
[22:59] * rturk-away is now known as rturk
[22:59] <tarfik> graph: https://www.dropbox.com/s/lbrpk6ias6r4459/avg_get_time.PNG
[23:00] * allsystemsarego (~allsystem@5-12-240-115.residential.rdsnet.ro) Quit (Quit: Leaving)
[23:01] * wwformat (~chatzilla@61.187.54.9) has joined #ceph
[23:01] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[23:01] <erice> stj: If it got through the prepare, it should have created a file system and a mount point. Do you see the mount point under /var/lib/ceph/osd/ ?
[23:01] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[23:02] * nwat (~textual@eduroam-255-104.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[23:02] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[23:03] <stj> erice: strange... I don't. I'll investigate...
[23:03] * xiaoxi (~xiaoxi@192.102.204.38) has joined #ceph
[23:03] * lightspeed (~lightspee@2001:8b0:16e:1:216:eaff:fe59:4a3c) has joined #ceph
[23:03] <ponyofdeath> mikedawson, pmatulis ok now i have it reporting this http://paste.ubuntu.com/6476056
[23:03] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[23:03] <ponyofdeath> http://paste.ubuntu.com/6476057
[23:05] <mikedawson> ponyofdeath: that's better. now get "8 osds: 4 up, 4 in" to say " 8 osds: 8 up, 8 in" by starting the remaining osds. Look at the output of 'ceph osd tree' to see what osds are down and out
[23:06] <ponyofdeath> mikedawson: yeah its the ones on the local box
[23:06] <ponyofdeath> the one that is running the mon
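[Editor's note: a sketch of reading `ceph osd tree` as mikedawson suggests. The tree below is made up to resemble ponyofdeath's 8-OSD layout, not real output; the awk filter picks out the OSDs marked down.]

```shell
# Made-up 'ceph osd tree' output for an 8-OSD, 2-host cluster (illustrative only).
tree=$(cat <<'EOF'
# id    weight  type name       up/down reweight
-1      8       root default
-2      4       host node1
0       1       osd.0   up      1
1       1       osd.1   up      1
2       1       osd.2   up      1
3       1       osd.3   up      1
-3      4       host node2
4       1       osd.4   down    0
5       1       osd.5   down    0
6       1       osd.6   down    0
7       1       osd.7   down    0
EOF
)

# Print the OSDs that are down -- these are the daemons left to start.
echo "$tree" | awk '$3 ~ /^osd\./ && $4 == "down" {print $3}'
```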
[23:07] <dmsimard> Can anyone remind me why it's recommended to run CephFS on very recent kernel versions?
[23:09] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[23:10] * KindTwo (KindOne@h68.208.89.75.dynamic.ip.windstream.net) has joined #ceph
[23:10] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[23:10] * KindTwo is now known as KindOne
[23:11] <scuttlemonkey> dmsimard: more recent kernel is always better
[23:12] <dmsimard> scuttlemonkey: I know but wasn't there a specific reason ? My memory is failing
[23:12] <dmsimard> scuttlemonkey: I remember it being somewhat of a concern for me considering that kernel version would only hit the next ubuntu LTS
[23:13] * erice_ (~erice@c-98-245-48-79.hsd1.co.comcast.net) has joined #ceph
[23:13] * minchen (~minchen@202.197.9.8) has joined #ceph
[23:13] <scuttlemonkey> ahh
[23:13] <scuttlemonkey> yeah there was something but I also do not recall what
[23:13] <scuttlemonkey> gregsfortytwo would know, but he is involved in the developer summit at present
[23:14] <gregsfortytwo> nothing fs-specific in my head, but you might have been worrying about support for hashpspool or crush tunables
[23:14] * rturk is now known as rturk-away
[23:14] <pmatulis> how does a client know which monitor to contact?
[23:14] * rturk-away is now known as rturk
[23:15] <mikedawson> pmatulis: clients refer to ceph.conf
[23:15] <pmatulis> mikedawson: ok, but there can be several there
[23:16] <pmatulis> mikedawson: and i would assume clients use the monitor map b/c not all monitors are listed in ceph.conf
[23:16] <pmatulis> (not all monitors *need* to be listed)
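[Editor's note: pmatulis has it roughly right. A client only needs enough of ceph.conf to reach one listed monitor; it then fetches the current monmap from the quorum, so not every monitor has to appear in the file. A minimal client-side sketch, with hypothetical addresses:]

```ini
[global]
    ; any one reachable address is enough to bootstrap; the client
    ; then pulls the full, current monitor map from the quorum
    mon host = 192.168.0.10,192.168.0.11,192.168.0.12
```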
[23:16] <mikedawson> dmsimard: http://www.mail-archive.com/ceph-users@lists.ceph.com/msg05750.html
[23:17] <dmsimard> Ah, yeah, that's kind of a problem.. right, the filesystem disappearing
[23:17] <dmsimard> :D
[23:18] * Underbyte (~jerrad@pat-global.macpractice.net) Quit (Remote host closed the connection)
[23:18] <Pauline> Does anybody know if the ceph journal is clean when the osd's shut cleanly? Just asking if I can repartition the SSDs without ruining my setup...
[23:18] <L2SHO> can someone explain the difference between "osd reweight" and "crush reweight" ?
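[Editor's note: L2SHO's question goes unanswered in the channel, so a hedged summary: `ceph osd crush reweight` changes the CRUSH weight itself (conventionally the device's capacity in TiB) and persists, while `ceph osd reweight` applies a temporary 0..1 override used to shift PGs off an overfull OSD. The commands below are printed rather than executed, since they need a live cluster; weights are illustrative.]

```shell
# Printed, not executed -- these require a live cluster to run.
cmds=$(cat <<'EOF'
ceph osd crush reweight osd.3 1.82   # permanent CRUSH weight (capacity-based)
ceph osd reweight 3 0.85             # temporary 0..1 override for rebalancing
EOF
)
echo "$cmds"
```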
[23:18] <dmsimard> Ah so this issue doesn't occur if you use fuse.ceph ?
[23:20] * sjm (~sjm@gzac10-107-1.nje.twosigma.com) has left #ceph
[23:21] <mikedawson> dmsimard: fuse uses whatever version of the ceph packages you have installed without the need to update kernel (so in that case, I believe Yan is assuming you have a recent Ceph installed that includes the patch)
[23:21] * elder (~elder@c-71-195-31-37.hsd1.mn.comcast.net) has joined #ceph
[23:21] * ChanServ sets mode +o elder
[23:25] <ircolle> elder! You should stop by #ceph-summit
[23:25] <elder> Hey, maybe I will!
[23:26] * sjustwork (~sam@2607:f298:a:607:38aa:d318:6f02:da9b) Quit (Quit: Leaving.)
[23:26] * dmick (~dmick@38.122.20.226) Quit (Quit: Leaving.)
[23:26] * joshd (~joshd@2607:f298:a:607:354c:fce3:e8b:463) Quit (Quit: Leaving.)
[23:26] * xinxinsh (~xinxinsh@jfdmzpr04-ext.jf.intel.com) Quit (Remote host closed the connection)
[23:26] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[23:26] <elder> Oh great, now everybody's quitting.
[23:26] <ircolle> ha
[23:27] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)
[23:28] * Tamil (~tamil@38.122.20.226) Quit (Ping timeout: 480 seconds)
[23:29] * dxd828 (~dxd828@host-92-24-127-29.ppp.as43234.net) has joined #ceph
[23:30] * sagewk1 (~sage@2607:f298:a:607:219:b9ff:fe40:55fe) has joined #ceph
[23:30] * Tamil (~tamil@38.122.20.226) has joined #ceph
[23:30] * rendar (~s@host100-181-dynamic.3-87-r.retail.telecomitalia.it) Quit ()
[23:30] * joshd (~joshd@2607:f298:a:607:354c:fce3:e8b:463) has joined #ceph
[23:30] * dmick (~dmick@38.122.20.226) has joined #ceph
[23:31] * avijaira (~avijaira@c-24-6-37-207.hsd1.ca.comcast.net) has joined #ceph
[23:31] * sagewk (~sage@38.122.20.226) Quit (Read error: Connection reset by peer)
[23:32] * Tamil (~tamil@38.122.20.226) has left #ceph
[23:32] * Tamil (~tamil@38.122.20.226) has joined #ceph
[23:34] * lofejndif (~lsqavnbok@9YYAAG91X.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[23:36] * jonas (~jonas@188-183-5-254-static.dk.customer.tdc.net) has joined #ceph
[23:36] * jonas is now known as jo0nas
[23:40] * mattt_ (~textual@cpc25-rdng20-2-0-cust162.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[23:43] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[23:43] * Cube (~Cube@66-87-67-172.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[23:44] * john_barbee (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[23:45] * simpleirc (~simpleirc@61.187.54.9) has joined #ceph
[23:46] * al-maisan (~al-maisan@86.188.131.84) has joined #ceph
[23:46] * Cube (~Cube@66-87-65-213.pools.spcsdns.net) has joined #ceph
[23:47] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[23:47] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Remote host closed the connection)
[23:50] * JoeGruher (~JoeGruher@jfdmzpr04-ext.jf.intel.com) has joined #ceph
[23:50] * rongze (~rongze@117.79.232.204) has joined #ceph
[23:52] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[23:56] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[23:56] * simpleirc (~simpleirc@61.187.54.9) Quit (Remote host closed the connection)
[23:57] * sleinen (~Adium@2001:620:0:25:9c16:c303:d60b:72ae) Quit (Quit: Leaving.)
[23:57] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[23:58] * xiaoxi (~xiaoxi@192.102.204.38) has left #ceph
[23:58] * rongze (~rongze@117.79.232.204) Quit (Ping timeout: 480 seconds)
[23:58] * Siva (~sivat@117.192.37.147) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.