#ceph IRC Log


IRC Log for 2015-06-22

Timestamps are in GMT/BST.

[0:00] * linjan (~linjan@213.8.240.146) Quit (Ping timeout: 480 seconds)
[0:06] * funnel_ (~funnel@81.4.123.134) has joined #ceph
[0:07] * alexxy[home] (~alexxy@79.173.81.171) has joined #ceph
[0:07] * Knorrie (knorrie@yoshi.kantoor.mendix.nl) Quit (Ping timeout: 480 seconds)
[0:07] * funnel (~funnel@81.4.123.134) Quit (Ping timeout: 480 seconds)
[0:07] * Knorrie (knorrie@yoshi.kantoor.mendix.nl) has joined #ceph
[0:07] * funnel_ is now known as funnel
[0:08] * alexxy (~alexxy@2001:470:1f14:106::2) Quit (Ping timeout: 480 seconds)
[0:15] * georgem (~Adium@23.91.150.96) has joined #ceph
[0:15] * madkiss1 (~madkiss@2001:6f8:12c3:f00f:9837:7492:a792:20f2) has joined #ceph
[0:17] * zigo_ (~quassel@gplhost-3-pt.tunnel.tserv18.fra1.ipv6.he.net) has joined #ceph
[0:19] * dneary (~dneary@pool-96-252-45-212.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[0:19] * fred`` (fred@earthli.ng) Quit (Ping timeout: 480 seconds)
[0:19] * foxxx0 (~fox@2a01:4f8:200:216b::2) Quit (Ping timeout: 480 seconds)
[0:20] * Aal (~mollstam@8Q4AABQ4F.tor-irc.dnsbl.oftc.net) Quit ()
[0:20] * madkiss (~madkiss@2001:6f8:12c3:f00f:ac2f:cfe8:bfb6:b12) Quit (Ping timeout: 480 seconds)
[0:20] * mlausch (~mlausch@2001:8d8:1fe:7:1a7:a58e:d6bd:78) Quit (Ping timeout: 480 seconds)
[0:20] * Nacer (~Nacer@2001:41d0:fe82:7200:c85c:fe50:108d:2bc6) Quit (Ping timeout: 480 seconds)
[0:20] * murmur_ (~murmur@zeeb.org) Quit (Ping timeout: 480 seconds)
[0:20] * zigo (~quassel@gplhost-3-pt.tunnel.tserv18.fra1.ipv6.he.net) Quit (Ping timeout: 480 seconds)
[0:20] * kapil (~ksharma@2620:113:80c0:5::2222) Quit (Ping timeout: 480 seconds)
[0:21] * Nacer (~Nacer@2001:41d0:fe82:7200:c85c:fe50:108d:2bc6) has joined #ceph
[0:21] * foxxx0 (~fox@nano-srv.net) has joined #ceph
[0:22] * mlausch (~mlausch@2001:8d8:1fe:7:1a7:a58e:d6bd:78) has joined #ceph
[0:23] * kapil (~ksharma@2620:113:80c0:5::2222) has joined #ceph
[0:23] * MentalRay (~MRay@107.171.161.165) has joined #ceph
[0:24] * Sysadmin88 (~IceChat77@2.124.164.69) has joined #ceph
[0:24] * fred`` (fred@earthli.ng) has joined #ceph
[0:27] * Meths (~meths@2.25.223.20) Quit (Quit: leaving)
[0:27] * dneary (~dneary@pool-96-252-45-212.bstnma.fios.verizon.net) has joined #ceph
[0:31] * murmur (~murmur@zeeb.org) has joined #ceph
[0:31] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[0:32] * Nacer (~Nacer@2001:41d0:fe82:7200:c85c:fe50:108d:2bc6) Quit (Remote host closed the connection)
[0:35] * Meths (~meths@2.25.223.20) has joined #ceph
[0:42] * nsoffer (~nsoffer@bzq-79-180-80-9.red.bezeqint.net) Quit (Ping timeout: 480 seconds)
[0:48] * georgem (~Adium@23.91.150.96) Quit (Quit: Leaving.)
[0:52] * danieagle (~Daniel@201-1-132-196.dsl.telesp.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[1:08] * OutOfNoWhere (~rpb@199.68.195.101) has joined #ceph
[1:10] * Concubidated (~Adium@71.21.5.251) Quit (Quit: Leaving.)
[1:20] * MentalRay (~MRay@107.171.161.165) Quit (Quit: This computer has gone to sleep)
[1:27] * MentalRay (~MRay@107.171.161.165) has joined #ceph
[1:37] * MentalRay (~MRay@107.171.161.165) Quit (Quit: This computer has gone to sleep)
[1:37] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[1:50] * oms101 (~oms101@p20030057EA754E00C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:51] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Ping timeout: 480 seconds)
[1:58] * emik0_ (~emik0@77.41.112.207) Quit (Remote host closed the connection)
[1:59] * oms101 (~oms101@p20030057EA07C400C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:01] * DV_ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[2:02] * dneary (~dneary@pool-96-252-45-212.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[2:02] * MentalRay (~MRay@107.171.161.165) has joined #ceph
[2:17] * Debesis (~0x@143.252.117.89.static.mezon.lt) Quit (Ping timeout: 480 seconds)
[2:24] * squizzi (~squizzi@nat-pool-rdu-t.redhat.com) Quit (Ping timeout: 480 seconds)
[2:25] * MentalRay (~MRay@107.171.161.165) Quit (Quit: This computer has gone to sleep)
[2:30] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[2:31] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit ()
[2:53] * MrHeavy_ (~MrHeavy@pool-108-54-190-117.nycmny.fios.verizon.net) Quit (Quit: Leaving)
[2:59] * MentalRay (~MRay@107.171.161.165) has joined #ceph
[2:59] * MentalRay (~MRay@107.171.161.165) Quit ()
[3:12] * MentalRay (~MRay@107.171.161.165) has joined #ceph
[3:23] * burley (~khemicals@cpe-98-28-239-78.cinci.res.rr.com) has joined #ceph
[3:49] * MentalRay (~MRay@107.171.161.165) Quit (Quit: This computer has gone to sleep)
[3:55] * derjohn_mobi (~aj@x590d237b.dyn.telefonica.de) has joined #ceph
[3:57] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[4:03] * aj__ (~aj@x590e8074.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[4:06] * ichavero_ is now known as imcsk8
[4:48] * shang (~ShangWu@175.41.48.77) has joined #ceph
[4:56] * MentalRay (~MRay@107.171.161.165) has joined #ceph
[5:02] * MentalRay (~MRay@107.171.161.165) Quit (Quit: This computer has gone to sleep)
[5:17] * nolan_ (~nolan@phong.sigbus.net) has joined #ceph
[5:18] * nolan (~nolan@2001:470:1:41:a800:ff:fe3e:ad08) Quit (Ping timeout: 480 seconds)
[5:18] * nolan_ is now known as nolan
[5:21] * Scrin (~K3NT1S_aw@5.175.204.133) has joined #ceph
[5:40] * Vacuum__ (~Vacuum@88.130.201.240) has joined #ceph
[5:47] * Vacuum_ (~Vacuum@88.130.221.125) Quit (Ping timeout: 480 seconds)
[5:51] * Scrin (~K3NT1S_aw@7R2AABV7T.tor-irc.dnsbl.oftc.net) Quit ()
[6:03] <m0zes> so, I found out my coworker created all 432 of our osds with an inode size of 256, rather than 2048. are the performance implications serious enough for me to recreate them all individually? we're using cephfs and rbd on top of xfs osds.
[6:03] <m0zes> and ceph version 0.94.2
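[Editor's note, not from the log: the inode size m0zes asks about is fixed at mkfs time, so it can be inspected but only changed by rebuilding the filesystem. A hedged sketch; the device path and mount point are placeholders for an actual OSD:

```shell
# Check the inode size of an existing XFS OSD filesystem:
# look for "isize=256" vs "isize=2048" in the output.
xfs_info /var/lib/ceph/osd/ceph-0

# Inode size cannot be changed in place. Recreating means evacuating the
# OSD first, then rebuilding the filesystem on its device, e.g.:
mkfs.xfs -f -i size=2048 /dev/sdb1
```

Whether recreating all 432 OSDs is worth it depends on measured metadata performance; the log records the question but no authoritative answer.]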
[6:09] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[6:10] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[6:11] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[6:13] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[6:14] * shylesh__ (~shylesh@121.244.87.124) has joined #ceph
[6:14] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[6:17] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[6:21] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[6:26] * yguang11 (~yguang11@12.31.82.125) has joined #ceph
[6:27] * calvinx (~calvin@101.100.172.246) has joined #ceph
[6:34] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) has joined #ceph
[6:35] * sleinen1 (~Adium@2001:620:0:82::102) has joined #ceph
[6:42] * sleinen (~Adium@84-72-160-233.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[6:49] * shylesh (~shylesh@121.244.87.124) has joined #ceph
[6:52] * yguang11_ (~yguang11@2001:4998:effd:7804::1065) has joined #ceph
[6:58] * yguang11 (~yguang11@12.31.82.125) Quit (Ping timeout: 480 seconds)
[7:01] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[7:05] * vikhyat (~vumrao@121.244.87.116) has joined #ceph
[7:12] * OutOfNoWhere (~rpb@199.68.195.101) Quit (Ping timeout: 480 seconds)
[7:17] * sjm (~sjm@49.32.0.217) has joined #ceph
[7:17] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:21] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[7:23] * yguang11_ (~yguang11@2001:4998:effd:7804::1065) Quit (Ping timeout: 480 seconds)
[7:24] * yguang11 (~yguang11@2001:4998:effd:7804::1065) has joined #ceph
[7:26] * karnan (~karnan@121.244.87.117) has joined #ceph
[7:32] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:48] * yguang11 (~yguang11@2001:4998:effd:7804::1065) Quit (Ping timeout: 480 seconds)
[7:49] * tobiash (~quassel@mail.bmw-carit.de) has joined #ceph
[8:00] * nsoffer (~nsoffer@bzq-84-111-112-230.cablep.bezeqint.net) has joined #ceph
[8:09] * Nacer (~Nacer@2001:41d0:fe82:7200:432:fbb4:5c5:4216) has joined #ceph
[8:12] * doppelgrau (~doppelgra@5.147.18.69) has joined #ceph
[8:14] * doppelgrau (~doppelgra@5.147.18.69) Quit ()
[8:18] * overclk (~overclk@121.244.87.117) has joined #ceph
[8:18] * shohn (~shohn@dslb-088-074-070-032.088.074.pools.vodafone-ip.de) has joined #ceph
[8:24] * oro (~oro@178-164-140-166.pool.digikabel.hu) has joined #ceph
[8:25] * Sysadmin88 (~IceChat77@2.124.164.69) Quit (Quit: If you think nobody cares, try missing a few payments)
[8:29] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[8:29] * overclk (~overclk@121.244.87.117) has joined #ceph
[8:35] * Nacer (~Nacer@2001:41d0:fe82:7200:432:fbb4:5c5:4216) Quit (Remote host closed the connection)
[8:35] * amote (~amote@121.244.87.116) has joined #ceph
[8:37] * amote (~amote@121.244.87.116) Quit (Remote host closed the connection)
[8:37] * amote (~amote@121.244.87.116) has joined #ceph
[8:40] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) has joined #ceph
[8:41] * T1w (~jens@node3.survey-it.dk) has joined #ceph
[8:46] * shohn (~shohn@dslb-088-074-070-032.088.074.pools.vodafone-ip.de) Quit (Read error: Connection reset by peer)
[8:48] * shohn (~shohn@dslb-088-074-070-032.088.074.pools.vodafone-ip.de) has joined #ceph
[8:48] * nardial (~ls@dslb-178-011-179-229.178.011.pools.vodafone-ip.de) has joined #ceph
[8:48] * shohn (~shohn@dslb-088-074-070-032.088.074.pools.vodafone-ip.de) Quit (Read error: Connection reset by peer)
[8:49] * sleinen1 (~Adium@2001:620:0:82::102) Quit (Ping timeout: 480 seconds)
[8:49] * shohn (~shohn@dslb-088-074-070-032.088.074.pools.vodafone-ip.de) has joined #ceph
[8:50] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) has joined #ceph
[8:50] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) has joined #ceph
[8:52] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Quit: Ex-Chat)
[8:54] * wicope (~wicope@0001fd8a.user.oftc.net) has joined #ceph
[8:55] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[8:58] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[8:59] * badone (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[9:01] <Be-El> hi
[9:08] * shohn (~shohn@dslb-088-074-070-032.088.074.pools.vodafone-ip.de) Quit (Read error: Connection reset by peer)
[9:09] * shohn (~shohn@dslb-088-074-070-032.088.074.pools.vodafone-ip.de) has joined #ceph
[9:10] * shohn (~shohn@dslb-088-074-070-032.088.074.pools.vodafone-ip.de) Quit (Read error: Connection reset by peer)
[9:10] * shohn (~shohn@dslb-088-074-070-032.088.074.pools.vodafone-ip.de) has joined #ceph
[9:13] * shohn (~shohn@dslb-088-074-070-032.088.074.pools.vodafone-ip.de) Quit (Read error: Connection reset by peer)
[9:14] * shohn (~shohn@dslb-088-074-070-032.088.074.pools.vodafone-ip.de) has joined #ceph
[9:19] * shohn1 (~shohn@dslb-188-102-008-068.188.102.pools.vodafone-ip.de) has joined #ceph
[9:20] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[9:20] * linjan (~linjan@195.110.41.9) has joined #ceph
[9:21] * nsoffer (~nsoffer@bzq-84-111-112-230.cablep.bezeqint.net) Quit (Ping timeout: 480 seconds)
[9:21] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) has joined #ceph
[9:23] <MrBy> hi, where does the actual planning take place? https://wiki.ceph.com/Planning/ or http://tracker.ceph.com/projects/ceph/wiki/Planning ... ? both seem to be a bit outdated
[9:23] * shohn (~shohn@dslb-088-074-070-032.088.074.pools.vodafone-ip.de) Quit (Ping timeout: 480 seconds)
[9:23] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[9:26] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[9:27] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[9:29] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[9:29] * analbeard (~shw@support.memset.com) has joined #ceph
[9:31] * derjohn_mobi (~aj@x590d237b.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[9:32] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[9:33] * badone_ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[9:35] * dgurtner (~dgurtner@178.197.231.10) has joined #ceph
[9:38] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[9:39] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[9:42] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[9:42] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[9:44] * zacbri (~zacbri@glo44-5-88-164-16-77.fbx.proxad.net) Quit (Quit: Leaving)
[9:44] * vikhyat (~vumrao@121.244.87.116) Quit (Quit: Leaving)
[9:46] * emik0 (~emik0@91.241.13.28) Quit (Remote host closed the connection)
[9:46] * emik0 (~emik0@91.241.13.28) has joined #ceph
[9:49] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[9:55] * oblu (~o@62.109.134.112) Quit (Remote host closed the connection)
[9:56] * haomaiwang (~haomaiwan@114.111.166.250) Quit (Remote host closed the connection)
[9:58] * derjohn_mobi (~aj@2001:6f8:1337:0:5d8f:e91b:db4:fe1a) has joined #ceph
[10:01] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) has joined #ceph
[10:07] * Concubidated (~Adium@23.91.33.7) has joined #ceph
[10:08] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) Quit (Quit: Ex-Chat)
[10:08] * owasserm (~owasserm@52D9864F.cm-11-1c.dynamic.ziggo.nl) has joined #ceph
[10:10] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:11] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[10:14] * derjohn_mobi (~aj@2001:6f8:1337:0:5d8f:e91b:db4:fe1a) Quit (Ping timeout: 480 seconds)
[10:14] * SteveCap1er is now known as SteveCapper
[10:22] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[10:22] * derjohn_mobi (~aj@2001:6f8:1337:0:f563:ad4b:faaa:495f) has joined #ceph
[10:22] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:25] * fdmanana__ (~fdmanana@bl14-141-55.dsl.telepac.pt) has joined #ceph
[10:26] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[10:27] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:35] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[10:38] * rakesh (~rakesh@121.244.87.117) has joined #ceph
[10:40] * sjm (~sjm@49.32.0.217) Quit (Read error: Connection reset by peer)
[10:40] * rakesh (~rakesh@121.244.87.117) Quit ()
[10:40] * sjm (~sjm@49.32.0.217) has joined #ceph
[10:40] * rakesh (~rakesh@121.244.87.117) has joined #ceph
[10:41] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:43] * Debesis (0x@143.252.117.89.static.mezon.lt) has joined #ceph
[10:48] * masterpe_ is now known as masterpe
[10:54] * Hemanth (~Hemanth@121.244.87.117) has joined #ceph
[10:54] * An_T_oine (~Antoine@192.93.37.4) has joined #ceph
[10:56] * linjan (~linjan@109.253.82.215) has joined #ceph
[10:57] * emik0 (~emik0@91.241.13.28) Quit (Remote host closed the connection)
[10:59] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) has joined #ceph
[11:01] * sankarshan (~sankarsha@183.87.39.242) has joined #ceph
[11:02] * dgurtner (~dgurtner@178.197.231.10) Quit (Quit: leaving)
[11:05] <treenerd> Hi, is there a way to restore the admin permissions in ceph if I have overwritten client.admin with the wrong permissions?
[11:05] <treenerd> would it be possible if I disable cephx for example?
[11:07] <boolman> Hi, Is it possible to test journal on RAM?
[11:07] <treenerd> I've done ceph auth caps client.admin mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=one_standard' And I recognized that I used the admin ID instead of the id that i would like to use
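[Editor's note, not from the log: one documented way out of exactly this situation is that the monitor's own `mon.` key carries full monitor capabilities, so on a monitor host it can rewrite client.admin's caps even when client.admin itself can no longer do so. A hedged sketch; the keyring path and the mon id naming assume the packaging defaults of that era:

```shell
# Run on a monitor host. Uses the monitor's own key to restore the
# stock client.admin capabilities (adjust if your deployment differed).
ceph -n mon. -k /var/lib/ceph/mon/ceph-$(hostname -s)/keyring \
    auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *'
```

This works because `ceph auth caps` needs write access to the monitors, which the crippled client.admin key (mon 'allow r') no longer has.]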
[11:11] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) has joined #ceph
[11:14] <Be-El> treenerd: i'm not sure whether disabling cephx helps. cephx is for authentication; the authorization information (e.g. caps) is not stored in the key information on the clients
[11:14] * linjan (~linjan@109.253.82.215) Quit (Read error: Connection reset by peer)
[11:15] <Be-El> boolman: i've not done it before, but you can try to stop an osd, flush its journal, and change the journal symlink to a ramdisk / a file in a ramdisk, and recreate the journal
[11:15] <treenerd> Be-El; okay so what can I do?
[11:15] <boolman> Be-El: i tried, it didnt work =/
[11:15] <boolman> 2015-06-22 11:12:37.191558 7fe39d354780 -1 filestore(/var/lib/ceph/osd/ceph-28) mkjournal error creating journal on /var/lib/ceph/ramdisk/journal-28: (22) Invalid argument
[11:15] <boolman> 2015-06-22 11:12:37.191582 7fe39d354780 -1 ** ERROR: error creating fresh journal /var/lib/ceph/ramdisk/journal-28 for object store /var/lib/ceph/osd/ceph-28: (22) Invalid argument
[11:16] * sleinen (~Adium@2001:620:0:2d:7ed1:c3ff:fedc:3223) Quit (Quit: Leaving.)
[11:16] <Be-El> boolman: what exactly does the journal symlink point to?
[11:16] <boolman> tmpfs on /var/lib/ceph/ramdisk type tmpfs (rw,size=8192M)
[11:17] <Be-El> treenerd: sorry, i have no clue...the authentication and authorization information are stored in the mon database.....maybe there's a way to manipulate them
[11:17] <Be-El> treenerd: you may wait until some of the developers become active on the channel and ask them
[11:18] <Be-El> boolman: /var/lib/ceph/ramdisk/journal-28 is a different path
[11:18] <treenerd> okay thank you for the hint
[11:18] <boolman> Be-El: yes, so?
[11:18] <Be-El> boolman: that's not the ramdisk
[11:19] <Be-El> boolman: or did you create a file in the ramdisk?
[11:19] <boolman> journal-28 is the journal file
[11:19] <boolman> created by ceph-osd
[11:22] * dgurtner (~dgurtner@178.197.231.10) has joined #ceph
[11:23] <Be-El> boolman: did you try to create the journal manually with ceph-osd --mkjournal?
[11:23] <boolman> yes
[11:24] <Be-El> boolman: then you'll probably have to wait until one of the developers shows up.
[11:24] * sleinen (~Adium@130.59.94.88) has joined #ceph
[11:24] <boolman> Be-El: ok, but it should be possible?
[11:24] <Be-El> boolman: i think so
[11:25] <Be-El> boolman: but that's not an authoritative answer ;)
[11:26] * sleinen1 (~Adium@2001:620:0:82::10c) has joined #ceph
[11:26] <boolman> here is my command sequence: ceph osd set noout ; stop ceph-osd id=28 ; mount ramdisk ; change config, ceph-osd -i 28 --flush-journal ; ceph-osd -i 28 --mkjournal
[11:26] <boolman> flush first, then change config*
[11:28] <Be-El> boolman: that's exactly how i would do it
[11:29] <boolman> might be a stupid-proof thing, to not allow journal on ramdisk =)
[11:30] <Be-El> that may be the case. it is not a good idea
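[Editor's note, not from the log: boolman's sequence, sketched end to end. The osd id and tmpfs path follow his setup; the `journal dio` hint is a guess, but it fits the symptom: tmpfs does not support O_DIRECT, and the journal code opening the file with direct I/O would fail with exactly the `(22) Invalid argument` seen above.

```shell
ceph osd set noout
stop ceph-osd id=28                  # upstart-style, as on Ubuntu then
ceph-osd -i 28 --flush-journal       # flush before touching the journal

# Point the OSD's journal at a file on the mounted tmpfs.
rm -f /var/lib/ceph/osd/ceph-28/journal
ln -s /var/lib/ceph/ramdisk/journal-28 /var/lib/ceph/osd/ceph-28/journal

# tmpfs cannot do O_DIRECT; disabling direct/async journal I/O may be
# required for --mkjournal to succeed. In ceph.conf under [osd.28]:
#   journal dio = false
#   journal aio = false

ceph-osd -i 28 --mkjournal
start ceph-osd id=28
ceph osd unset noout
```

As Be-El says, a volatile journal loses data on power failure; this is only defensible for the benchmarking experiment boolman describes.]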
[11:30] <boolman> i have a lab-cluster with 30 osd's and would like to find out if my journal disk's is my bottleneck
[11:31] <boolman> after adding 15 osd's my performance was cut in half
[11:31] <Be-El> did you test the journal drive performance with respect to O_DSYNC?
[11:32] * sleinen (~Adium@130.59.94.88) Quit (Ping timeout: 480 seconds)
[11:32] * calvinx (~calvin@101.100.172.246) Quit (Quit: calvinx)
[11:34] <boolman> hm, only 20MB/s on the newly added nodes
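[Editor's note, not from the log: Be-El's O_DSYNC question refers to the usual journal-device check: write with `oflag=dsync` so every block is synced to stable storage, the way OSD journal writes are. A sketch writing to a scratch file; point it at the journal device only if that device holds no data you need:

```shell
# Sequential 4k synchronous writes; the MB/s dd reports approximates
# what the journal can sustain. ~20 MB/s, as boolman sees, is very low.
dd if=/dev/zero of=/tmp/journal-dsync-test bs=4k count=2500 oflag=dsync
rm -f /tmp/journal-dsync-test
```
]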
[11:34] * sleinen1 (~Adium@2001:620:0:82::10c) Quit (Ping timeout: 480 seconds)
[11:35] <An_T_oine> in a production environment, can we install a MON on a VM (vmware)?
[11:38] * sleinen1 (~Adium@2001:620:0:82::105) has joined #ceph
[11:40] * elder (~elder@c-24-245-18-91.hsd1.mn.comcast.net) Quit (Quit: Leaving)
[11:46] * gford (~fford@p509901f2.dip0.t-ipconnect.de) has joined #ceph
[11:49] * oro (~oro@178-164-140-166.pool.digikabel.hu) Quit (Ping timeout: 480 seconds)
[11:50] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[12:04] <treenerd> No one who has an idea how to reset client.admin permissions?
[12:06] * rakesh (~rakesh@121.244.87.117) Quit (Quit: Leaving)
[12:14] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) has joined #ceph
[12:16] * sjm (~sjm@49.32.0.217) Quit (Ping timeout: 480 seconds)
[12:16] * linjan (~linjan@46.210.217.98) has joined #ceph
[12:22] * sjm (~sjm@49.32.0.217) has joined #ceph
[12:26] * ngoswami (~ngoswami@121.244.87.116) has joined #ceph
[12:30] * arbrandes (~arbrandes@189.78.72.151) has joined #ceph
[12:40] * sleinen (~Adium@macsl.switch.ch) has joined #ceph
[12:41] * linjan (~linjan@46.210.217.98) Quit (Ping timeout: 480 seconds)
[12:41] * capri_oner (~capri@212.218.127.222) Quit (Read error: Connection reset by peer)
[12:42] <treenerd> So what I have found out is: there is no tool for resetting the permissions of client.admin (the superuser) if you have accidentally overwritten it.
[12:42] <treenerd> So reinstall a whole cluster
[12:42] * ninkotech_ (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[12:42] * An_T_oine (~Antoine@192.93.37.4) Quit (Ping timeout: 480 seconds)
[12:42] * ninkotech (~duplo@static-84-242-87-186.net.upcbroadband.cz) Quit (Read error: Connection reset by peer)
[12:42] * smerz (~ircircirc@37.74.194.90) has joined #ceph
[12:43] <treenerd> not nice, but it's a staging cluster for doing tests in preproduction.
[12:43] <treenerd> So it is possible to reinstall
[12:44] * sleinen1 (~Adium@2001:620:0:82::105) Quit (Read error: Connection reset by peer)
[12:49] * ade (~abradshaw@tmo-113-122.customers.d1-online.com) has joined #ceph
[12:49] * cloud_vision (~cloud_vis@bzq-79-180-33-186.red.bezeqint.net) Quit (Read error: Connection reset by peer)
[12:50] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[12:52] * shang (~ShangWu@175.41.48.77) has joined #ceph
[12:53] * Hemanth (~Hemanth@121.244.87.117) Quit (Ping timeout: 480 seconds)
[13:06] * oro (~oro@84-236-17-104.pool.digikabel.hu) has joined #ceph
[13:07] * karnan (~karnan@121.244.87.117) Quit (Remote host closed the connection)
[13:08] * linjan (~linjan@46.210.246.78) has joined #ceph
[13:11] * nardial (~ls@dslb-178-011-179-229.178.011.pools.vodafone-ip.de) Quit (Quit: Leaving)
[13:23] * cdelatte (~cdelatte@cpe-172-72-105-98.carolina.res.rr.com) has joined #ceph
[13:23] * finster (~finster@2a01:4f8:d15:1000::2) has joined #ceph
[13:31] * yanzheng (~zhyan@182.139.20.134) has joined #ceph
[13:33] * RomeroJnr (~h0m3r@hosd.leaseweb.net) has joined #ceph
[13:45] * dugravot6 (~dugravot6@dn-infra-04.lionnois.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[13:47] * oblu (~o@62.109.134.112) has joined #ceph
[13:50] * rdas (~rdas@121.244.87.116) Quit (Remote host closed the connection)
[13:53] * rdas (~rdas@121.244.87.116) has joined #ceph
[13:55] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) Quit (Ping timeout: 480 seconds)
[13:57] * dugravot6 (~dugravot6@nat-persul-montet.wifi.univ-lorraine.fr) has joined #ceph
[14:02] * shang (~ShangWu@175.41.48.77) Quit (Quit: Ex-Chat)
[14:06] * linjan (~linjan@46.210.246.78) Quit (Ping timeout: 480 seconds)
[14:06] * vbellur (~vijay@107-1-123-195-ip-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[14:09] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:11] * ganders (~root@190.2.42.21) has joined #ceph
[14:15] * bene (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[14:17] * sleinen (~Adium@macsl.switch.ch) Quit (Ping timeout: 480 seconds)
[14:18] * garphy`aw is now known as garphy
[14:19] * linjan (~linjan@195.110.41.9) has joined #ceph
[14:19] * jdillaman (~jdillaman@pool-108-18-97-82.washdc.fios.verizon.net) has joined #ceph
[14:27] * sleinen (~Adium@2001:620:0:82::105) has joined #ceph
[14:30] * kawa2014 (~kawa@89.184.114.246) Quit (Ping timeout: 480 seconds)
[14:31] * jeff (~oftc-webi@pool-71-191-88-247.washdc.fios.verizon.net) has joined #ceph
[14:32] * jeff is now known as Guest2504
[14:32] <Guest2504> Hi. We're getting a problem where sync blocks indefinitely on an rbd. I can see the pending IO operation in /sys/block/rbd*/inflight. Is this a known problem?
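[Editor's note, not from the log: the `inflight` counters Guest2504 mentions exist for every Linux block device and show two numbers per device, in-flight reads and in-flight writes. A quick way to dump them for all mapped krbd devices:

```shell
# Two columns per device: in-flight reads, in-flight writes.
for f in /sys/block/rbd*/inflight; do
    printf '%s: %s\n' "$f" "$(cat "$f")"
done
```

A count stuck above zero across repeated samples matches the "sync blocks indefinitely" symptom described here; the log records no resolution.]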
[14:34] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[14:38] * sleinen (~Adium@2001:620:0:82::105) Quit (Read error: Connection reset by peer)
[14:40] * sleinen (~Adium@194.230.159.227) has joined #ceph
[14:41] * sleinen (~Adium@194.230.159.227) Quit ()
[14:42] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[14:42] * kawa2014 (~kawa@212.110.41.244) Quit ()
[14:43] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[14:44] * Kioob`Taff (~plug-oliv@2a01:e35:2e8a:1e0::42:10) Quit (Quit: Leaving.)
[14:52] * overclk (~overclk@121.244.87.117) Quit (Quit: Leaving)
[14:58] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) has joined #ceph
[14:59] <boolman> how is data divided into pg's? are there any docs about this?
[15:01] <boolman> eg if i create a 100GB image with rbd
[15:02] <doppelgrau> boolman: Data is divided into objects, and these objects are distributed
[15:04] <boolman> my xen pool has 4096 pg's and replication factor 2. does it split the image into 4096 chunks across all available osd's?
[15:05] <doppelgrau> boolman: by default images are split in 4 MB objects, but that can be changed
[15:05] * vbellur (~vijay@nat-pool-bos-u.redhat.com) has joined #ceph
[15:07] <boolman> doppelgrau: aha ok thanks
[15:07] <doppelgrau> boolman: http://ceph.com/docs/master/man/8/rbd/ (order)
[15:07] <boolman> size 10240 MB in 2560 objects
[15:07] <boolman> order 22 (4096 kB objects)
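[Editor's note, not from the log: boolman's `rbd info` output is self-consistent. Order 22 means 2^22-byte (4 MB) objects, and a 10240 MB image therefore maps to 2560 objects; the 100 GB image discussed earlier would map to 25600. A quick check:

```shell
order=22
obj_bytes=$((1 << order))                    # 4194304 bytes = 4 MB
img_mb=10240
echo $((img_mb * 1024 * 1024 / obj_bytes))   # 2560 objects
```

The placement of those objects onto PGs (and PGs onto OSDs) is then done by CRUSH, independent of the image layout.]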
[15:07] * finster (~finster@2a01:4f8:d15:1000::2) has left #ceph
[15:09] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) has joined #ceph
[15:11] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[15:12] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:18] * doppelgrau_ (~doppelgra@pd956d116.dip0.t-ipconnect.de) has joined #ceph
[15:19] * mhack (~mhack@nat-pool-bos-t.redhat.com) has joined #ceph
[15:22] * doppelgrau (~doppelgra@pd956d116.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[15:22] * doppelgrau_ is now known as doppelgrau
[15:22] * kawa2014 (~kawa@212.110.41.244) Quit (Ping timeout: 480 seconds)
[15:22] * kawa2014 (~kawa@89.184.114.246) has joined #ceph
[15:23] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[15:30] * rdas (~rdas@121.244.87.116) Quit (Quit: Leaving)
[15:31] * squizzi (~squizzi@2602:306:bc59:85f0:3ea9:f4ff:fe5a:6064) has joined #ceph
[15:32] * primechuck (~primechuc@host-95-2-129.infobunker.com) has joined #ceph
[15:35] <zenpac> Is there any ceph training going on?
[15:39] * nardial (~ls@dslb-178-011-179-229.178.011.pools.vodafone-ip.de) has joined #ceph
[15:39] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[15:39] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[15:40] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) Quit (Quit: Ex-Chat)
[15:41] <doppelgrau> zenpac: There are the "Ceph Days" or you can book someone with experience to give you an introduction
[15:41] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[15:42] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Connection reset by peer)
[15:44] * An_T_oine (~Antoine@192.93.37.4) has joined #ceph
[15:45] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) has joined #ceph
[15:45] * jpr (~jpr@138.26.125.8) has joined #ceph
[15:46] <zenpac> Why are there mkfs operations on both the monitor nodes and the osd nodes? I'm working through a manual deployment and I find this confusing.
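[Editor's note, not from the log: the two `--mkfs` steps zenpac hits in a manual deployment initialize different daemon data stores; neither is a filesystem in the `mkfs.xfs` sense. Roughly, with the mon id, monmap and keyring paths taken as examples from the manual-deployment docs of that era:

```shell
# Initialize a monitor's data store under /var/lib/ceph/mon/...
ceph-mon --mkfs -i a --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring

# Initialize an OSD's object store (and its key) under /var/lib/ceph/osd/...
ceph-osd -i 0 --mkfs --mkkey
```
]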
[15:46] * squizzi (~squizzi@2602:306:bc59:85f0:3ea9:f4ff:fe5a:6064) Quit (Remote host closed the connection)
[15:48] * harold (~hamiller@71-94-227-123.dhcp.mdfd.or.charter.com) Quit ()
[15:50] * shylesh (~shylesh@121.244.87.124) Quit (Quit: Leaving)
[15:51] * zigo_ (~quassel@gplhost-3-pt.tunnel.tserv18.fra1.ipv6.he.net) Quit (Ping timeout: 480 seconds)
[15:53] * T1w (~jens@node3.survey-it.dk) Quit (Ping timeout: 480 seconds)
[15:59] * dyasny (~dyasny@104.158.35.152) has joined #ceph
[16:00] * ramonskie (ab15507e@107.161.19.109) has joined #ceph
[16:00] * debian112 (~bcolbert@24.126.201.64) has joined #ceph
[16:01] <ramonskie> it seems that the radosgw sock file is not created
[16:01] <ramonskie> is this a known issue on ubuntu?
[16:03] * zigo (~quassel@182.54.233.6) has joined #ceph
[16:06] * jclm (~jclm@50-206-204-8-static.hfc.comcastbusiness.net) has joined #ceph
[16:08] * bene2 (~ben@nat-pool-bos-t.redhat.com) has joined #ceph
[16:09] * Nacer_ (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[16:10] * bene (~ben@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:12] * karnan (~karnan@106.51.234.145) has joined #ceph
[16:13] <zenpac> I can't seem to get Mon data : http://paste.debian.net/252667/
[16:18] <zenpac> I get this same fault message in most of my mds logs.
[16:23] <Be-El> zenpac: so check the connection and state of your monitors
[16:23] <zenpac> Be-El: What connect, and how?
[16:24] <Be-El> zenpac: have a look at the error message. the ceph clients tries to connect to the monitors and fails
[16:24] <Be-El> zenpac: so check that the monitors are running, are binding to the correct network interface, and you can reach them on their IP addresses
[16:25] <zenpac> maybe my "auth supported = none" is wrong...
[16:26] <zenpac> it should probably be blank to use the default cephx
[16:28] <zenpac> I show no 6789 port in use...
[16:29] <zenpac> netstat shows nothing
[16:30] * sjm (~sjm@49.32.0.217) Quit (Ping timeout: 480 seconds)
[16:30] * wujek_ (zok@neurosis.pl) Quit (Read error: Connection reset by peer)
[16:30] * wujek (zok@neurosis.pl) has joined #ceph
[16:31] <Be-El> zenpac: and that's why i said you need to check your monitors....
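[Editor's note, not from the log: Be-El's checklist as commands, for a monitor that shows nothing on port 6789. The admin-socket path assumes default naming:

```shell
# Is the monitor process running at all?
pgrep -a ceph-mon

# Is anything listening on the monitor port?
ss -lntp | grep 6789 || netstat -lntp | grep 6789

# If it is running, ask it directly over its admin socket.
ceph --admin-daemon /var/run/ceph/ceph-mon.$(hostname -s).asok mon_status
```

If the first two come back empty, as zenpac's netstat did, the monitor never started (or crashed), and its log under /var/log/ceph/ is the next place to look.]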
[16:32] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) has joined #ceph
[16:34] * gford (~fford@p509901f2.dip0.t-ipconnect.de) Quit (Quit: Leaving)
[16:36] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[16:37] * garphy is now known as garphy`aw
[16:37] * karnan (~karnan@106.51.234.145) Quit (Ping timeout: 480 seconds)
[16:38] * jwilkins (~jwilkins@2601:644:4100:bfef:ea2a:eaff:fe08:3f1d) has joined #ceph
[16:39] * yanzheng (~zhyan@182.139.20.134) Quit (Quit: This computer has gone to sleep)
[16:41] * jwilkins (~jwilkins@2601:644:4100:bfef:ea2a:eaff:fe08:3f1d) Quit ()
[16:42] <zenpac> I think I need to remake my mon filesystem..
[16:42] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[16:43] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) has joined #ceph
[16:44] * garphy`aw is now known as garphy
[16:44] * imcsk8 (~ichavero@189.231.13.18) Quit (Quit: Reconnecting)
[16:45] * imcsk8 (~ichavero@189.231.13.18) has joined #ceph
[16:48] * jtw (~jwilkins@2601:644:4100:bfef:ea2a:eaff:fe08:3f1d) has joined #ceph
[16:51] * CampGareth (~smuxi@ks3360145.kimsufi.com) Quit (Remote host closed the connection)
[16:51] * CampGareth (~smuxi@ks3360145.kimsufi.com) has joined #ceph
[16:51] * jwilkins (~jwilkins@c-67-180-123-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:52] * irq0 (~seri@cpu0.net) Quit (Ping timeout: 480 seconds)
[16:53] * DV_ (~veillard@2001:41d0:1:d478::1) Quit (Ping timeout: 480 seconds)
[16:54] * ramonskie (ab15507e@107.161.19.109) Quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client)
[16:54] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[16:58] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[17:00] * Tetard (~regnauld@x1.x0.dk) has joined #ceph
[17:01] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[17:01] * irq0 (~seri@amy.irq0.org) has joined #ceph
[17:01] * amote (~amote@121.244.87.116) Quit (Quit: Leaving)
[17:01] * Tetard_ (~regnauld@x1.x0.dk) Quit (Read error: Connection reset by peer)
[17:02] * visbits (~textual@8.29.138.28) has joined #ceph
[17:02] <visbits> anyone had luck using the s3 gateway with a client like transmit?
[17:02] <visbits> should be compatible.. no?
[17:02] * overclk (~overclk@61.3.109.10) has joined #ceph
[17:03] * kefu (~kefu@114.92.126.132) has joined #ceph
[17:04] * alram (~alram@cpe-172-250-2-46.socal.res.rr.com) Quit (Quit: leaving)
[17:04] * DV_ (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[17:04] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[17:05] * kanagaraj (~kanagaraj@nat-pool-bos-t.redhat.com) has joined #ceph
[17:06] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[17:07] <Guest2504> Hi. We're getting a problem where sync blocks indefinitely on an rbd. I can see the pending IO operation in /sys/block/rbd*/inflight. Is this a known problem?
[17:11] * kefu (~kefu@114.92.126.132) Quit (Ping timeout: 480 seconds)
[17:13] * oro (~oro@84-236-17-104.pool.digikabel.hu) Quit (Ping timeout: 480 seconds)
[17:15] * kefu (~kefu@114.92.126.132) has joined #ceph
[17:16] * oblu (~o@62.109.134.112) Quit (Quit: ~)
[17:17] * kefu (~kefu@114.92.126.132) Quit ()
[17:19] * georgem (~Adium@23-91-150-96.cpe.pppoe.ca) has joined #ceph
[17:19] * georgem (~Adium@23-91-150-96.cpe.pppoe.ca) Quit ()
[17:19] * georgem (~Adium@fwnat.oicr.on.ca) has joined #ceph
[17:21] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[17:24] * treenerd (~treenerd@cpe90-146-100-181.liwest.at) has left #ceph
[17:25] * ade (~abradshaw@tmo-113-122.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[17:26] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[17:27] * garphy is now known as garphy`aw
[17:29] * linjan (~linjan@195.110.41.9) Quit (Ping timeout: 480 seconds)
[17:31] * kefu (~kefu@114.92.126.132) has joined #ceph
[17:31] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) has joined #ceph
[17:31] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) Quit (Quit: bye!)
[17:31] * nardial (~ls@dslb-178-011-179-229.178.011.pools.vodafone-ip.de) Quit (Quit: Leaving)
[17:34] * garphy`aw is now known as garphy
[17:35] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[17:39] * kefu (~kefu@114.92.126.132) Quit (Ping timeout: 480 seconds)
[17:40] * kefu (~kefu@114.92.100.239) has joined #ceph
[17:40] * karnan (~karnan@106.51.234.145) has joined #ceph
[17:41] * shylesh__ (~shylesh@121.244.87.124) Quit (Remote host closed the connection)
[17:41] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[17:41] * derjohn_mobi (~aj@2001:6f8:1337:0:f563:ad4b:faaa:495f) Quit (Ping timeout: 480 seconds)
[17:44] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Ping timeout: 480 seconds)
[17:45] * marrusl (~mark@cpe-67-247-9-253.nyc.res.rr.com) has joined #ceph
[17:48] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) has joined #ceph
[17:50] * zenpac (~zenpac3@66.55.33.66) Quit (Quit: Leaving)
[17:54] * alram (~alram@206.169.83.146) has joined #ceph
[17:55] * kefu (~kefu@114.92.100.239) Quit (Max SendQ exceeded)
[17:56] * kefu (~kefu@114.92.100.239) has joined #ceph
[17:57] * branto (~branto@ip-213-220-214-203.net.upcbroadband.cz) Quit (Ping timeout: 480 seconds)
[17:58] * daviddcc (~dcasier@LAubervilliers-656-1-16-164.w217-128.abo.wanadoo.fr) has joined #ceph
[18:00] * sjm (~sjm@114.79.155.115) has joined #ceph
[18:00] * An_T_oine (~Antoine@192.93.37.4) Quit (Quit: Leaving)
[18:01] * sankarshan (~sankarsha@183.87.39.242) Quit (Quit: Are you sure you want to quit this channel (Cancel/Ok) ?)
[18:04] * squizzi (~squizzi@nat-pool-rdu-t.redhat.com) has joined #ceph
[18:05] * BranchPredictor (branch@predictor.org.pl) Quit (Remote host closed the connection)
[18:05] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) Quit (Remote host closed the connection)
[18:06] * cholcombe (~chris@c-73-180-29-35.hsd1.or.comcast.net) has joined #ceph
[18:06] * BranchPredictor (branch@predictor.org.pl) has joined #ceph
[18:09] * kawa2014 (~kawa@89.184.114.246) Quit (Quit: Leaving)
[18:10] * kawa2014 (~kawa@212.110.41.244) has joined #ceph
[18:12] * bobrik (~bobrik@83.243.64.45) has joined #ceph
[18:12] * jpr (~jpr@138.26.125.8) Quit (Quit: Leaving.)
[18:16] * moore (~moore@97-124-90-185.phnx.qwest.net) has joined #ceph
[18:19] * moore (~moore@97-124-90-185.phnx.qwest.net) Quit (Remote host closed the connection)
[18:20] * moore (~moore@64.202.160.233) has joined #ceph
[18:21] * zenpac (~zenpac3@66.55.33.66) has joined #ceph
[18:23] * dugravot6 (~dugravot6@nat-persul-montet.wifi.univ-lorraine.fr) Quit (Quit: Leaving.)
[18:24] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[18:25] * xarses_ (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:28] * joshd1 (~jdurgin@68-119-140-18.dhcp.ahvl.nc.charter.com) Quit (Quit: Leaving.)
[18:31] * kefu (~kefu@114.92.100.239) Quit (Max SendQ exceeded)
[18:32] * kefu (~kefu@114.92.100.239) has joined #ceph
[18:33] * andrewschoen_ is now known as andrewschoen
[18:37] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[18:41] * lifeboy (~roland@196.45.29.61) has joined #ceph
[18:41] <zenpac> Anyone know what is wrong with my ceph.conf: http://git.io/vL7Nf ? I'm unable to get mon working.
[18:42] * trociny (~mgolub@93.183.239.2) Quit (Ping timeout: 480 seconds)
[18:42] * dgurtner (~dgurtner@178.197.231.10) Quit (Ping timeout: 480 seconds)
[18:43] * lifeboy (~roland@196.45.29.61) Quit ()
[18:44] * oblu (~o@62.109.134.112) has joined #ceph
[18:44] * lifeboy (~roland@196.45.29.61) has joined #ceph
[18:45] * kanagaraj (~kanagaraj@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[18:46] * Be-El (~quassel@fb08-bcf-pc01.computational.bio.uni-giessen.de) Quit (Remote host closed the connection)
[18:46] <zenpac> mon log says: unable to read magic from mon data.. did you run mkcephfs? (I thought I did run mkcephfs, in the ceph-mon --mkfs -i .... step..)
[18:47] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[18:50] * tom (~tom@167.88.45.146) Quit (Remote host closed the connection)
[18:50] * rlrevell (~leer@vbo1.inmotionhosting.com) has joined #ceph
[18:50] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) Quit (Quit: Leaving.)
[18:51] * wschulze (~wschulze@cpe-69-206-240-164.nyc.res.rr.com) has joined #ceph
[18:52] * xarses_ (~xarses@12.164.168.117) has joined #ceph
[18:53] * Concubidated (~Adium@23.91.33.7) Quit (Quit: Leaving.)
[18:54] * TheSov (~TheSov@cip-248.trustwave.com) has joined #ceph
[18:54] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[18:55] * nsoffer (~nsoffer@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[18:55] <zenpac> Looks like I'm unable to contact any of the other mons ..
[18:57] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Ping timeout: 480 seconds)
[18:58] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[19:02] * ngoswami (~ngoswami@121.244.87.116) Quit (Quit: Leaving)
[19:02] * moore (~moore@64.202.160.233) Quit (Remote host closed the connection)
[19:03] * moore (~moore@64.202.160.233) has joined #ceph
[19:03] * kawa2014 (~kawa@212.110.41.244) Quit (Quit: Leaving)
[19:08] * Sysadmin88 (~IceChat77@2.124.164.69) has joined #ceph
[19:09] <zenpac> I use this type of command to create the filesystem: ceph-mon --mkfs -i {{ceph_idx}} --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
[19:09] <zenpac> But the mon service doesn't start on 6789
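For comparison, a minimal manual mon bootstrap (the sequence zenpac's command belongs to) looks roughly like this. The host name `mon1` and the IP are placeholders, and the mon data directory must exist and be empty before `--mkfs`, or the daemon later fails with errors like the "unable to read magic from mon data" seen above:

```shell
# Placeholder host name mon1 and IP 10.0.0.1; run as root on the mon host.
ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring \
    --gen-key -n mon. --cap mon 'allow *'
monmaptool --create --add mon1 10.0.0.1:6789 \
    --fsid "$(uuidgen)" /etc/ceph/monmap
mkdir -p /var/lib/ceph/mon/ceph-mon1    # data dir must exist before --mkfs
ceph-mon --mkfs -i mon1 --monmap /etc/ceph/monmap \
    --keyring /etc/ceph/ceph.mon.keyring
ceph-mon -i mon1                        # should now bind to port 6789
```

If the "mon data" path in ceph.conf does not match the directory `--mkfs` populated, the daemon will look in the wrong place and report the same magic error, which is one thing worth checking here.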
[19:12] * tom (~tom@167.88.45.146) has joined #ceph
[19:19] <zenpac> ceph-mon -d seems to give: http://git.io/vL5mK
[19:20] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (Ping timeout: 480 seconds)
[19:21] * overclk (~overclk@61.3.109.10) Quit (Quit: Leaving)
[19:22] * linjan (~linjan@80.179.241.26) has joined #ceph
[19:24] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[19:26] * garphy is now known as garphy`aw
[19:34] * Hemanth (~Hemanth@117.192.243.80) has joined #ceph
[19:36] * Kupo1 (~tyler.wil@23.111.254.159) has joined #ceph
[19:36] * rlrevell1 (~leer@vbo1.inmotionhosting.com) has joined #ceph
[19:41] * kefu (~kefu@114.92.100.239) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:42] * rlrevell (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[19:42] * kefu (~kefu@114.92.100.239) has joined #ceph
[19:42] * Lyncos (~lyncos@208.71.184.41) has joined #ceph
[19:42] * Sysadmin88 (~IceChat77@2.124.164.69) Quit (Quit: ASCII a stupid question, get a stupid ANSI!)
[19:42] * yguang11_ (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:42] * yguang11_ (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[19:42] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[19:42] * yguang11_ (~yguang11@2001:4998:effd:600:e048:4b9e:b69f:dd57) has joined #ceph
[19:42] * yguang11_ (~yguang11@2001:4998:effd:600:e048:4b9e:b69f:dd57) Quit (Remote host closed the connection)
[19:43] * trociny (~Mikolaj@91.225.202.4) has joined #ceph
[19:44] <Lyncos> Hi, just wondering about something... I'm experiencing a big difference in speed and latency as soon as I change the replication from 1 to 2 .. I know this is expected .. but by a factor of 2? from 1200MB/s to 500MB/s? also the latency is twice as high
[19:44] * reed (~reed@75-101-54-131.dsl.static.fusionbroadband.com) has joined #ceph
[19:45] <Lyncos> I'm not saturating network or disks
[19:45] * kefu (~kefu@114.92.100.239) Quit ()
[19:46] * circ-user-5fQIm (~circuser-@50.46.225.207) Quit (Remote host closed the connection)
[19:47] <Lyncos> min_size is set at 1
[19:54] <TheSov> well size = 1 is striping across ALL OSDs
[19:54] <TheSov> size = 2 is basically mirroring the OSDs, so yes, writing the same thing twice will take twice as long :)
[19:56] <TheSov> size = 3 is writing every byte 3 times so that will slow it down too. it's the trade-off, right? all the disks in your OSDs go X speed. you add it all up and divide that by the number of copies to get the total bandwidth
[19:56] <TheSov> 1200MB/s is still how fast your disks go, but total bandwidth is now divided by 2
[19:56] <TheSov> so yeah it will be ~600ish
[19:58] <TheSov> there is obviously some overhead involved in replication, so it's slightly slower than that
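TheSov's divide-by-copies rule of thumb can be written out directly; the 1200 MB/s figure is the size=1 number from the conversation, and real results land somewhat lower because of replication overhead:

```shell
# Back-of-envelope: replicated write throughput ~= raw aggregate / copies.
raw_mbps=1200   # measured with size=1 (figure from the conversation)
for size in 1 2 3; do
  echo "size=$size -> ~$(( raw_mbps / size )) MB/s before overhead"
done
```

With size=2 this predicts ~600 MB/s, in the same ballpark as the ~500 MB/s Lyncos measured once the overhead is accounted for.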
[19:58] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) has joined #ceph
[19:58] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) Quit (Remote host closed the connection)
[19:59] <Lyncos> I'm running of MicronSSD
[19:59] <Lyncos> 2 per server X 6 servers
[19:59] <Lyncos> each cards can do 3GB/s
[20:00] <Lyncos> If I run a 2nd test on a 2nd client at same time.. the speed is divided by 2 .. even if I have plenty of bandwidth / disk IO
[20:00] * TheSov3 (~TheSov@38.106.143.234) has joined #ceph
[20:01] * Concubidated (~Adium@192.170.161.35) has joined #ceph
[20:01] * TheSov4 (~TheSov@204.13.200.248) has joined #ceph
[20:01] <TheSov4> sorry got dc'd
[20:01] <Lyncos> np
[20:01] <TheSov4> but yes i know you have fast ssd's
[20:01] <TheSov4> that doesn't stop individual computers from being bottlenecks
[20:02] <TheSov4> how much ram and cpu do you have
[20:02] <TheSov4> per osd
[20:02] <Lyncos> 128G ram
[20:02] <Lyncos> I run 24 OSD spinning
[20:02] <Lyncos> and 2 micron ssd osd
[20:02] <Lyncos> I get 67G cached and 25G free
[20:02] <TheSov4> whoa whoa whoa, you run 12 rust to 1 ssd?
[20:03] <TheSov4> that is a high split, 4 to 1 is the recommendation
[20:03] <TheSov4> remember its all random IO
[20:03] <TheSov4> 12x randomio even for ssd is pretty high
[20:03] <Lyncos> what do you mean, 12 rust?
[20:03] <TheSov4> rust, spindles
[20:03] <TheSov4> iron oxide disks
[20:03] <Lyncos> SSD are not set as journal
[20:04] <TheSov4> oh, they are tiered?
[20:04] <Lyncos> right now I did set a pool running off OSD only
[20:04] <Lyncos> I have 12 micron cards across 6 servers set as a pool
[20:05] <Lyncos> each servers have 1x 10gig interface
[20:05] <TheSov4> you have a bottleneck somewhere.
[20:05] <doppelgrau> Lyncos: twice the latency is expected, you do 2 writes more or less sequentially with size=2
[20:05] <Lyncos> I remember having better perf than that before
[20:06] <Lyncos> how can I get only 200MB/s writes on that kind of hardware
[20:06] <Lyncos> 3 copies seems pretty decent, no?
[20:06] <TheSov4> your 10gig switch might suck
[20:06] <Lyncos> I tested with Iperf ..
[20:06] * rotbeard (~redbeard@2a02:908:df10:d300:76f0:6dff:fe3b:994d) has joined #ceph
[20:06] <doppelgrau> Lyncos: client => primary OSD (starts writing) => secondary(s) (write) => secondary(s) send ACK to primary => primary waits for its own IO to finish (if it hasn't happened before) => primary sends ACK to client
[20:06] <TheSov4> iperf sends 0'd packets
[20:06] <TheSov4> its very easy to transmit silence
[20:06] <Lyncos> ok
[20:07] <Lyncos> I'll try to actually send real data
[20:07] * TheSov (~TheSov@cip-248.trustwave.com) Quit (Ping timeout: 480 seconds)
[20:07] <doppelgrau> Lyncos: so going from size 1 => 2 an increase in the latency is expected, size 2 => 3 only a minor increase
[20:08] <Lyncos> I remember having better performances before
[20:08] * TheSov3 (~TheSov@38.106.143.234) Quit (Ping timeout: 480 seconds)
[20:08] <Lyncos> even with Size=3 i was able to do 800-900ish not 200
[20:09] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[20:09] <doppelgrau> Lyncos: sequential or parallel workload?
[20:09] <Lyncos> simple rados bench
[20:10] * sjm (~sjm@114.79.155.115) Quit (Ping timeout: 480 seconds)
[20:10] <Lyncos> I know rados bench is not perfect.. but still
[20:10] <Lyncos> was having better perf before
[20:10] <Lyncos> I did test on a 40Gbit/s network and having good perf
[20:11] <Lyncos> 1073741824 bytes (1.1 GB) copied, 0.891074 s, 1.2 GB/s
[20:11] <Lyncos> I get full speed to each servers
[20:12] <doppelgrau> Lyncos: ok, then you'll have a problem somewhere in your network, or you're using other parameters for the benchmark (different thread count?)
[20:13] <Lyncos> I'll try higher thread count
[20:13] <Lyncos> nice
[20:13] <Lyncos> something is wrong with my setup
[20:14] <Lyncos> doing bench on micronSSD dedicated pool
[20:14] <Lyncos> creates 0 writes on the microns ssd
[20:15] <Lyncos> I think I have problem with names
[20:16] <Lyncos> in crushmap
[20:17] <doppelgrau> Lyncos: using the same object names for the platter and SSD root?
[20:18] <Lyncos> for the ruleset and the pool (root)
[20:18] <doppelgrau> Lyncos: I ran into the same problem, started using "-platter" or "-ssd" as a suffix
[20:18] <Lyncos> I renamed root to pool
[20:18] <Lyncos> I changed it but no luck
[20:18] <Lyncos> seems like my rule is fucked up
[20:18] <Lyncos> I try to delete the pool and re-create
[20:18] <Lyncos> still same problem
[20:19] <Lyncos> lol I don't know where the writes go
[20:20] <doppelgrau> Lyncos: can you put your crushmap somewhere online and a ceph osd tree?
[20:20] <Lyncos> yeah this is what I was doing :-)
[20:20] <Lyncos> http://pastebin.com/L94tS2pB
[20:20] <Lyncos> and
[20:20] <Lyncos> http://pastebin.com/1rCdraTN
[20:22] * jtw (~jwilkins@2601:644:4100:bfef:ea2a:eaff:fe08:3f1d) Quit (Ping timeout: 480 seconds)
[20:22] <Lyncos> Writes are going to the CDN log pool .. weird
[20:23] <Lyncos> even if the micronSSD pool is specified with ruleset 4
[20:23] * yguang11 (~yguang11@2001:4998:effd:600:ed71:5957:da87:7c51) has joined #ceph
[20:23] * yguang11 (~yguang11@2001:4998:effd:600:ed71:5957:da87:7c51) Quit (Remote host closed the connection)
[20:23] <doppelgrau> Lyncos: the default pool should be your SSD pool?
[20:23] <Lyncos> no
[20:23] <Lyncos> default is the spinnings
[20:24] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:24] <Lyncos> look
[20:24] <Lyncos> pool 78 'micronSSD' replicated size 3 min_size 2 crush_ruleset 3 object_hash rjenkins pg_num 200 pgp_num 200 last_change 350804 flags hashpspool stripe_width 0
[20:24] * karnan (~karnan@106.51.234.145) Quit (Ping timeout: 480 seconds)
[20:24] <Lyncos> damn
[20:24] <Lyncos> rule 3
[20:24] <Lyncos> I just seen it
[20:24] * nhm (~nhm@184-97-242-33.mpls.qwest.net) Quit (Ping timeout: 480 seconds)
[20:24] <Lyncos> sorry being soo... blind
[20:25] <Lyncos> let's test again
[20:25] * shylesh (~shylesh@123.136.237.110) has joined #ceph
[20:25] <Lyncos> ok the writes go to the right place now.. but still having similar speed
[20:27] <Lyncos> http://pastebin.com/w7BGn0Nc
[20:27] <Lyncos> ceph version 0.94.2
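The mismatch found above (the pool carried crush_ruleset 3 while the intended SSD rule was 4) can be inspected and fixed in place rather than by recreating the pool; on 0.94.x (Hammer) the pool setting is called crush_ruleset, and `test-object` below is just an arbitrary object name for checking placement:

```shell
ceph osd pool get micronSSD crush_ruleset    # show which rule the pool uses
ceph osd pool set micronSSD crush_ruleset 4  # point the pool at the SSD rule
ceph osd map micronSSD test-object           # verify which OSDs an object maps to
```

`ceph osd map` is a quick sanity check that objects in the pool now land on the SSD OSDs listed in `ceph osd tree`.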
[20:28] <doppelgrau> Lyncos: how fast are your SSDs when doing direct IO (or synchronous IO)?
[20:28] <Lyncos> let's test
[20:29] <Lyncos> dd if=/dev/zero of=test.img bs=1G count=5 oflag=direct
[20:29] * yguang11_ (~yguang11@2001:4998:effd:600:11f6:3c86:6c6d:dd5) has joined #ceph
[20:29] <Lyncos> 5368709120 bytes (5.4 GB) copied, 5.99029 s, 896 MB/s
[20:29] <doppelgrau> Lyncos: use bs=4M, that is more like the rados bench
[20:29] <Lyncos> k
[20:30] * yguang11 (~yguang11@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) Quit (Read error: Connection reset by peer)
[20:30] <Lyncos> quite slow
[20:30] <Lyncos> 204800000 bytes (205 MB) copied, 15.9428 s, 12.8 MB/s
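The two dd runs above differ only in request size; both bypass the page cache via oflag=direct, so the 4M run exposes per-request latency that one huge 1G request amortizes away. A repeatable version of the comparison (writes test.img in the current directory; the count values are arbitrary, and the filesystem must support O_DIRECT):

```shell
# One huge direct request vs many 4M direct requests.
dd if=/dev/zero of=test.img bs=1G count=1   oflag=direct
dd if=/dev/zero of=test.img bs=4M count=256 oflag=direct
rm -f test.img
```

The second figure is the one comparable to rados bench, which issues object-sized (4 MB by default) writes.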
[20:31] <doppelgrau> Lyncos: so you have your answer now (more or less)
[20:31] <Lyncos> this is strange
[20:31] <doppelgrau> Lyncos: rados bench with larger objects should perform better, then
[20:31] <Lyncos> ok will try
[20:31] <Lyncos> just to doublecheck
[20:32] * OnTheRock (~overonthe@199.68.193.54) has joined #ceph
[20:32] <Lyncos> 10000000 objsize made an improvement
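Lyncos's improvement from a larger object size matches rados bench's knobs: -b sets the object size (4 MB by default) and -t the number of concurrent operations, and fast SSD pools usually need larger values of one or both to show their full throughput. A sketch against the micronSSD pool from the conversation (durations and sizes are arbitrary examples):

```shell
# Write phase; --no-cleanup keeps the objects so a read phase can follow.
rados bench -p micronSSD 60 write -b 8388608 -t 32 --no-cleanup
# Sequential read of what was just written.
rados bench -p micronSSD 60 seq -t 32
# Remove the benchmark objects afterwards.
rados -p micronSSD cleanup
```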
[20:33] <Lyncos> but I agree my ssd looks slow
[20:33] <Lyncos> /dev/rssdb1 on /var/lib/ceph/osd/ceph-74 type xfs (rw,noatime,attr2,inode64,noquota)
[20:34] * karnan (~karnan@106.51.232.8) has joined #ceph
[20:34] * daviddcc (~dcasier@LAubervilliers-656-1-16-164.w217-128.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[20:37] <zenpac> I'm stuck here: http://git.io/vL5yq
[20:38] <Lyncos> I upgraded kernel to latest version... maybe driver was fucked up
[20:39] <zenpac> I'm setting up 3 nodes, each symmetric with mon, mds, and osd .
[20:40] <zenpac> But it hangs on the mon bootstrapping ..
[20:42] * toMeloos (~toMeloos@53568B3D.cm-6-7c.dynamic.ziggo.nl) has joined #ceph
[20:44] * bstillwell (~bryan@bokeoa.com) has joined #ceph
[20:45] <bstillwell> Is there a way to use ceph-disk to prepare an OSD and use an existing journal partition on an SSD?
[20:46] <bstillwell> For example I had an HDD die recently and the journal is on /dev/sdd2
[20:46] <bstillwell> I want it to continue using sdd2 instead of creating sdd4
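On 0.94-era tooling, ceph-disk prepare takes the journal device as a second positional argument, so reusing the freed partition should be possible along these lines. This is a sketch, not a confirmed answer to bstillwell's question: /dev/sdc stands in for the replacement HDD, and the journal partition must carry the Ceph journal partition type GUID for ceph-disk to adopt it:

```shell
# Pass the existing journal partition explicitly so ceph-disk does not
# allocate a new one (e.g. sdd4) on the SSD. /dev/sdc = replacement HDD.
ceph-disk prepare --fs-type xfs /dev/sdc /dev/sdd2
# If ceph-disk refuses the partition, tag partition 2 of /dev/sdd with
# the Ceph journal type GUID first:
sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdd
```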
[20:47] * jtw (~jwilkins@2601:644:4100:bfef:ea2a:eaff:fe08:3f1d) has joined #ceph
[20:51] * alfredodeza (~alfredode@198.206.133.89) Quit (Ping timeout: 480 seconds)
[20:51] * tupper (~tcole@2001:420:2280:1272:8900:f9b8:3b49:567e) Quit (Ping timeout: 480 seconds)
[20:54] <toMeloos> Hi, I'm trying to connect radosgw to keystone. Followed the manuals and I can use radosgw with its own credentials but not with keystone credentials. Running with rgw debug level 20, radosgw logs "failed to authorize request" when I try to connect using the swift client. How can I find out what's going wrong with the authorization request?
[20:55] <wolsen> anyone know anything about http://gitbuilder.ceph.com/libapache-mod-fastcgi-deb-$(lsb_release -sc)-x86_64-basic/ref/master no longer being on the gitbuilder.ceph.com for apache w/ 100 continue support for debian based distros?
[20:55] <wolsen> ooops, wrong url -its the http://gitbuilder.ceph.com/apache2-deb...
[20:55] <wolsen> rather than just the libapache-mod-fastcgi-deb
[20:56] * TheSov (~TheSov@cip-248.trustwave.com) has joined #ceph
[20:56] * shylesh (~shylesh@123.136.237.110) Quit (Remote host closed the connection)
[20:59] * TheSov4 (~TheSov@204.13.200.248) Quit (Read error: Connection reset by peer)
[21:06] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[21:08] * arufuredosan (~oftc-webi@c-73-184-229-245.hsd1.ga.comcast.net) has joined #ceph
[21:09] <zenpac> Do any of my component indices need to start at 0 or have non-numerical id's?
[21:10] * karnan (~karnan@106.51.232.8) Quit (Ping timeout: 480 seconds)
[21:13] <zenpac> I was able to get Mon running and listening on 6789 on all hosts..
[21:14] * shohn1 (~shohn@dslb-188-102-008-068.188.102.pools.vodafone-ip.de) Quit (Quit: Leaving.)
[21:15] * alfredodeza (~alfredode@198.206.133.89) has joined #ceph
[21:18] <zenpac> I think now it's just not finding the right folder structure it expects in the XFS mounts
[21:23] <Anticimex> hmm - https://bugs.launchpad.net/nova/+bug/1467570
[21:24] * arufuredosan (~oftc-webi@c-73-184-229-245.hsd1.ga.comcast.net) Quit (Quit: Page closed)
[21:27] * jclm (~jclm@50-206-204-8-static.hfc.comcastbusiness.net) Quit (Quit: Leaving.)
[21:29] * dynamicudpate (~overonthe@199.68.193.62) has joined #ceph
[21:30] <TheSov> does anyone know if the ceph devs intend on fully populating the armhf repos?
[21:30] <TheSov> they seem mostly empty
[21:32] * jclm (~jclm@50-206-204-8-static.hfc.comcastbusiness.net) has joined #ceph
[21:34] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has joined #ceph
[21:34] * t0rn (~ssullivan@2607:fad0:32:a02:56ee:75ff:fe48:3bd3) has left #ceph
[21:36] * OnTheRock (~overonthe@199.68.193.54) Quit (Ping timeout: 480 seconds)
[21:40] * daviddcc (~dcasier@84.197.151.77.rev.sfr.net) has joined #ceph
[21:53] <mongo> TheSov: I am sure they would be willing to take pull requests.
[21:53] <TheSov> mongo, i dont know how that works
[21:54] <TheSov> the last time i pulled and compiled code was on slackware 8
[21:56] <TheSov> i just find it odd that the ceph debian repos have armhf but nearly nothing in them, i see ceph-deploy but none of the ceph core
[21:57] * visbits (~textual@8.29.138.28) Quit (Quit: Textual IRC Client: www.textualapp.com)
[21:57] <zenpac> Do I need a separate partition for a ceph journal?
[21:58] * Swert (~oftc-webi@c83-253-71-20.bredband.comhem.se) has joined #ceph
[22:00] <zenpac> I'm using /dev/sda2 and /dev/sda3 for data/journal.. root partition is /dev/sda1
[22:00] <TheSov> zenpac, is this for testing or usage?
[22:00] * Swert (~oftc-webi@c83-253-71-20.bredband.comhem.se) Quit (Remote host closed the connection)
[22:01] <zenpac> Testing only.. I know performance will suck.
[22:01] * omar_m (~omar_m@12.164.168.117) has joined #ceph
[22:01] * omar_m (~omar_m@12.164.168.117) Quit ()
[22:03] <TheSov> ok
[22:04] <TheSov> yes, you need partitions or separate disks for data and journal
[22:04] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[22:04] <TheSov> are you using ceph-deploy or working manually?
[22:06] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[22:08] * xarses_ (~xarses@12.164.168.117) Quit (Remote host closed the connection)
[22:13] <zenpac> TheSov: manually. I have 2 partitions for data /journal
[22:13] * Hemanth (~Hemanth@117.192.243.80) Quit (Quit: Leaving)
[22:13] <zenpac> Can I use ceph-disk to prepare them?
[22:17] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[22:20] <TheSov> yes
[22:25] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:27] * DV (~veillard@2001:41d0:1:d478::1) has joined #ceph
[22:28] <zenpac> I don't see any options in ceph-disk to work on a particular partition.
[22:29] <zenpac> I'm worried about destroying the entire disk.
[22:31] <TheSov> dont zap the disk
[22:33] * dgurtner (~dgurtner@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:33] * bene2 (~ben@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[22:33] * toMeloos (~toMeloos@53568B3D.cm-6-7c.dynamic.ziggo.nl) Quit (Quit: Ik ga weg)
[22:34] * garphy`aw is now known as garphy
[22:36] * tupper (~tcole@173.38.117.80) has joined #ceph
[22:38] <zenpac> http://paste.debian.net/253383/ ?
[22:39] * trociny (~Mikolaj@91.225.202.4) Quit (Quit: away)
[22:42] <zenpac> This worked better: ceph-disk-prepare --cluster ceph --fs-type xfs /var/lib/ceph/osd/ceph-1 /dev/sda3
[22:47] * badone__ (~brad@CPE-121-215-241-179.static.qld.bigpond.net.au) has joined #ceph
[22:47] * todin (tuxadero@kudu.in-berlin.de) Quit (Remote host closed the connection)
[22:49] * vbellur (~vijay@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[22:53] * georgem (~Adium@fwnat.oicr.on.ca) Quit (Quit: Leaving.)
[22:53] * cok (~chk@test.roskilde-festival.dk) has joined #ceph
[22:53] * ganders (~root@190.2.42.21) Quit (Quit: WeeChat 0.4.2)
[22:54] * cok (~chk@test.roskilde-festival.dk) has left #ceph
[22:54] * nsoffer (~nsoffer@bzq-79-180-80-9.red.bezeqint.net) has joined #ceph
[23:00] * garphy is now known as garphy`aw
[23:06] * visbits (~textual@8.29.138.28) has joined #ceph
[23:10] * Pommesgabel (~nastidon@tor-exit-node.7by7.de) has joined #ceph
[23:13] * Sysadmin88 (~IceChat77@2.124.164.69) has joined #ceph
[23:15] * rlrevell1 (~leer@vbo1.inmotionhosting.com) Quit (Ping timeout: 480 seconds)
[23:21] * ktdreyer|afk is now known as ktdreyer
[23:24] * Pommesgabel (~nastidon@7R2AABW1O.tor-irc.dnsbl.oftc.net) Quit (Remote host closed the connection)
[23:28] * DV (~veillard@2001:41d0:1:d478::1) Quit (Remote host closed the connection)
[23:30] * DV (~veillard@2001:41d0:a:f29f::1) has joined #ceph
[23:37] * Guest2504 (~oftc-webi@pool-71-191-88-247.washdc.fios.verizon.net) Quit (Remote host closed the connection)
[23:38] * DV (~veillard@2001:41d0:a:f29f::1) Quit (Ping timeout: 480 seconds)
[23:42] * SpaceDum1 is now known as SpaceDump
[23:43] * florz (nobody@2001:1a50:503c::2) Quit (Read error: Connection reset by peer)
[23:43] * florz (nobody@2001:1a50:503c::2) has joined #ceph
[23:43] * linjan (~linjan@80.179.241.26) Quit (Ping timeout: 480 seconds)
[23:43] * MentalRay (~MRay@MTRLPQ42-1176054809.sdsl.bell.ca) Quit (Quit: This computer has gone to sleep)
[23:46] * fridim_ (~fridim@56-198-190-109.dsl.ovh.fr) Quit (Ping timeout: 480 seconds)
[23:54] * oro (~oro@212-40-76-145.pool.digikabel.hu) has joined #ceph
[23:56] * murmur (~murmur@zeeb.org) Quit (Ping timeout: 480 seconds)
[23:59] * murmur (~murmur@zeeb.org) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.