#ceph IRC Log

IRC Log for 2014-07-15

Timestamps are in GMT/BST.

[0:02] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[0:07] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:07] <lupu> angdraug: pasted some version info about librados.so and librbd.so http://paste.openstack.org/show/86459/
[0:10] <angdraug> looks the same to me
[0:11] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Read error: No route to host)
[0:11] * Nacer_ (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[0:12] <lupu> i have not touched the libs outside "apt-get upgrade"
[0:13] * b0e (~aledermue@x2f30859.dyn.telefonica.de) Quit (Quit: Leaving.)
[0:13] <lupu> i will try to reproduce this behavior outside nova-compute
[0:13] <angdraug> sage: speaking of librbd vs librados versions, are there plans for ABI versioning in ceph?
[0:14] <angdraug> lupu: yes, as I said, everything you need to replicate it outside of nova is in rbd_utils.py
[0:14] <angdraug> let me know if you need help with that
[0:14] <lupu> i will try my best :D
[0:15] <angdraug> thanks!
[0:16] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[0:24] * rvhi (~rvhi@23.91.33.4) has joined #ceph
[0:25] <rvhi> hello!
[0:25] <rvhi> our mon process can't start,
[0:25] <rvhi> 2014-07-14 12:04:17.034407 7fa86ddcb700 -1 mon.storage1@0(electing).elector(147) Shutting down because I do not support required monitor features: { compat={},rocompat={},incompat={} }
[0:25] <rvhi> it was ok before, not sure what changed
[0:28] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[0:29] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[0:29] * humbolt (~elias@chello080109074153.4.15.vie.surfer.at) Quit (Read error: No route to host)
[0:29] * humbolt (~elias@chello080109074153.4.15.vie.surfer.at) has joined #ceph
[0:31] <cookednoodles> rvhi, update your kernel
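The "do not support required monitor features" error rvhi pasted usually indicates the monitors in the quorum are running mismatched Ceph releases. A sketch of the usual checks (hostnames are illustrative, loosely based on the `mon.storage1` name in the log; this assumes a standard deployment with SSH access and admin sockets):

```shell
# Compare the Ceph version on each monitor host; a feature mismatch
# typically means one mon was upgraded (or downgraded) out of step.
for host in storage1 storage2 storage3; do
    ssh "$host" ceph-mon --version
done

# Ask a running monitor, via its admin socket, what state and features
# it reports; run this on the monitor host itself.
ceph daemon mon.storage1 mon_status
```

Upgrading the lagging monitor to match the rest of the quorum is the usual fix.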
[0:31] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[0:37] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[0:38] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[0:38] * Nacer_ (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[0:39] * rturk is now known as rturk|afk
[0:39] * rturk|afk is now known as rturk
[0:42] * markbby (~Adium@168.94.245.3) has joined #ceph
[0:42] <joao> rvhi, what did you change?
[0:43] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[0:45] * baylight (~tbayly@74-220-196-40.unifiedlayer.com) has joined #ceph
[0:53] * humbolt (~elias@chello080109074153.4.15.vie.surfer.at) Quit (Ping timeout: 480 seconds)
[0:54] * baylight (~tbayly@74-220-196-40.unifiedlayer.com) Quit (Quit: Leaving.)
[0:54] * fsimonce (~simon@host50-69-dynamic.46-79-r.retail.telecomitalia.it) Quit (Quit: Coyote finally caught me)
[0:56] * rendar (~I@host37-118-dynamic.53-82-r.retail.telecomitalia.it) Quit ()
[0:57] * hedin_ (~hedin@81.25.179.168) has joined #ceph
[0:57] * hedin (~hedin@81.25.179.168) Quit (Read error: Connection reset by peer)
[0:58] * rturk is now known as rturk|afk
[1:01] * Hell_Fire (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[1:01] * Hell_Fire__ (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (Read error: Connection reset by peer)
[1:02] * baylight1 (~tbayly@69-195-66-4.unifiedlayer.com) Quit (Read error: Connection reset by peer)
[1:05] * hedin_ (~hedin@81.25.179.168) Quit (Ping timeout: 480 seconds)
[1:07] * zack_dolby (~textual@p67f6b6.tokynt01.ap.so-net.ne.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[1:10] * hedin (~hedin@81.25.179.168) has joined #ceph
[1:12] * hedin (~hedin@81.25.179.168) Quit (Remote host closed the connection)
[1:12] * rvhi (~rvhi@23.91.33.4) Quit (Ping timeout: 480 seconds)
[1:16] * bkopilov (~bkopilov@213.57.16.134) Quit (Ping timeout: 480 seconds)
[1:17] * bandrus (~Adium@4.31.55.106) Quit (Quit: Leaving.)
[1:23] * rweeks (~rweeks@pat.hitachigst.com) Quit (Quit: Leaving)
[1:28] * thb (~me@0001bd58.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:36] * oms101 (~oms101@p20030057EA03BE00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:41] * lofejndif (~lsqavnbok@7DKAABJ51.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[1:44] * rvhi (~rvhi@23.91.33.2) has joined #ceph
[1:45] * oms101 (~oms101@p20030057EA035F00EEF4BBFFFE0F7062.dip0.t-ipconnect.de) has joined #ceph
[1:47] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[1:50] * LeaChim (~LeaChim@host86-161-90-156.range86-161.btcentralplus.com) Quit (Read error: Operation timed out)
[1:50] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[1:53] * b0e (~aledermue@x2f30859.dyn.telefonica.de) has joined #ceph
[1:54] * analbeard (~shw@support.memset.com) Quit (Remote host closed the connection)
[1:55] * analbeard (~shw@support.memset.com) has joined #ceph
[1:56] * rvhi (~rvhi@23.91.33.2) Quit (Read error: Operation timed out)
[1:57] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[1:58] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[1:59] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Read error: Operation timed out)
[2:00] * b0e (~aledermue@x2f30859.dyn.telefonica.de) Quit (Quit: Leaving.)
[2:02] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[2:02] * zack_dolby (~textual@e0109-114-22-13-18.uqwimax.jp) has joined #ceph
[2:02] * rmoe (~quassel@12.164.168.117) Quit (Read error: Operation timed out)
[2:08] * kevinc (~kevinc__@client65-40.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[2:15] * bandrus (~Adium@66.87.64.223) has joined #ceph
[2:16] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) Quit (Quit: Obrigado por Tudo! :-) inte+ :-))
[2:19] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[2:20] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[2:31] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[2:36] * sputnik1_ (~sputnik13@207.8.121.241) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[2:38] * sarob (~sarob@2001:4998:effd:600:6854:720b:8673:2d0b) Quit (Remote host closed the connection)
[2:38] * sarob (~sarob@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) has joined #ceph
[2:39] * zerick (~eocrospom@190.187.21.53) Quit (Ping timeout: 480 seconds)
[2:40] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) Quit (Quit: Leaving.)
[2:41] * sarob (~sarob@nat-dip30-wl-d.cfw-a-gci.corp.yahoo.com) Quit (Read error: Operation timed out)
[2:41] * joef (~Adium@2620:79:0:131:8c50:6c5b:2907:d6b3) Quit (Remote host closed the connection)
[2:42] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[2:44] * KaZeR (~kazer@64.201.252.132) Quit (Remote host closed the connection)
[2:45] * linuxkidd (~linuxkidd@cpe-066-057-019-145.nc.res.rr.com) has joined #ceph
[2:53] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[2:53] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[2:54] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[2:57] * carif (~mcarifio@pool-173-76-155-34.bstnma.fios.verizon.net) has joined #ceph
[3:00] * xarses (~andreww@12.164.168.117) Quit (Ping timeout: 480 seconds)
[3:05] * markbby (~Adium@168.94.245.3) Quit (Remote host closed the connection)
[3:06] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) Quit (Ping timeout: 480 seconds)
[3:09] * narb (~Jeff@38.99.52.10) Quit (Quit: narb)
[3:11] * shang (~ShangWu@175.41.48.77) has joined #ceph
[3:16] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[3:21] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[3:22] * carif (~mcarifio@pool-173-76-155-34.bstnma.fios.verizon.net) Quit (Quit: Ex-Chat)
[3:35] * rvhi (~rvhi@23.91.33.4) has joined #ceph
[3:38] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) has joined #ceph
[3:47] * mjeanson (~mjeanson@00012705.user.oftc.net) Quit (Remote host closed the connection)
[3:50] * mjeanson (~mjeanson@bell.multivax.ca) has joined #ceph
[3:53] * zhaochao (~zhaochao@106.38.204.72) has joined #ceph
[3:54] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[4:14] * markbby (~Adium@168.94.245.4) has joined #ceph
[4:18] * huangjun (~kvirc@59.173.185.197) has joined #ceph
[4:25] * bkopilov (~bkopilov@213.57.16.55) has joined #ceph
[4:27] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[4:38] * bandrus (~Adium@66.87.64.223) Quit (Quit: Leaving.)
[4:41] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[4:44] * angdraug (~angdraug@12.164.168.117) Quit (Quit: Leaving)
[4:49] <Jakey> hi i am having this problem
[4:49] <Jakey> https://www.irccloud.com/pastebin/xDB74YOB
[4:49] <Jakey> @ joao
[4:50] <Jakey> my monitors is on the same node as the admin
[4:51] <cookednoodles> erm why ?
[4:51] <cookednoodles> thats a very weird setup
[4:52] * adamcrume (~quassel@2601:9:6680:47:2418:eae9:ec40:1b71) Quit (Remote host closed the connection)
[4:57] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[4:58] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[5:00] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[5:01] * lupu (~lupu@86.107.101.214) Quit (Ping timeout: 480 seconds)
[5:04] * AfC1 (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[5:07] * baylight (~tbayly@204.15.85.169) has joined #ceph
[5:07] * baylight (~tbayly@204.15.85.169) has left #ceph
[5:12] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Ping timeout: 480 seconds)
[5:18] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[5:19] * Cube1 (~Cube@66-87-64-223.pools.spcsdns.net) Quit (Quit: Leaving.)
[5:22] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) has joined #ceph
[5:28] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Quit: Ex-Chat)
[5:31] * vbellur (~vijay@122.167.88.42) Quit (Read error: Connection reset by peer)
[5:34] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[5:37] * reed (~reed@75-101-54-131.dsl.static.sonic.net) Quit (Quit: Ex-Chat)
[5:43] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[5:44] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) has joined #ceph
[5:44] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Read error: Operation timed out)
[5:45] * trond (~trond@evil-server.alseth.info) Quit (Quit: Lost terminal)
[5:45] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Read error: Operation timed out)
[5:46] * vbellur (~vijay@122.166.167.181) has joined #ceph
[5:46] * trond (~trond@evil-server.alseth.info) has joined #ceph
[5:46] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[5:47] * Vacum (~vovo@88.130.211.91) has joined #ceph
[5:48] <trond> Hi. I'm running kvm+librbd. Would I need to restart the kvm process after adding an extra monitor for the kvm process to find the new monitor? Or will it read the cluster state live from existing monitors to discover the new monitor?
[5:50] <iggy> trond: why do you ask? are you planning on removing the mon's that it was started with?
[5:50] <trond> yes
[5:50] <dmick> the cluster connection is dynamic
[5:51] <dmick> and persistent
[5:51] <iggy> yeah, you should be fine
[5:51] <dmick> if you maintain quorum, no one should notice, although you may need to change your 'initial monitors list' to handle reboot
[5:51] <Jakey> dmick: https://www.irccloud.com/pastebin/xDB74YOB
[5:52] <dmick> i.e. mon_initial_members
[5:52] <dmick> and/or mon_host
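dmick's advice about keeping the initial monitor list current can be captured as a ceph.conf fragment (monitor names and addresses here are illustrative, not from the log):

```ini
[global]
# Update these after adding or removing monitors, so that clients
# (including a restarted qemu/librbd process) can locate the quorum.
# A live client discovers new mons dynamically; these settings only
# matter at (re)connect time.
mon_initial_members = mon-a, mon-b, mon-c
mon_host = 192.0.2.11, 192.0.2.12, 192.0.2.13
```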
[5:52] <Jakey> dmick: ?
[5:53] <dmick> is there a question?
[5:53] <dmick> I see the middle of a big dump of log
[5:53] <dmick> <shrug>
[5:53] <Jakey> [ceph@node7 m_cluster]$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
[5:53] <Jakey> [ceph@node7 m_cluster]$ ceph health
[5:53] <Jakey> 2014-07-14 22:00:22.736467 7fce271d1700 0 monclient(hunting): authenticate timed out after 300
[5:53] <Jakey> 2014-07-14 22:00:22.736500 7fce271d1700 0 librados: client.admin authentication error (110) Connection timed out
[5:53] <Jakey> Error connecting to cluster: TimedOut
[5:53] <Jakey> dmick: ^^^
[5:53] <dmick> yeah, something's wrong with your cluster
[5:54] <Jakey> i run ceph and it keeps timing out
[5:54] * Vacum_ (~vovo@i59F79FEB.versanet.de) Quit (Ping timeout: 480 seconds)
[5:54] * rvhi (~rvhi@23.91.33.4) Quit (Read error: Operation timed out)
[5:54] <Jakey> dmick: my monitors is install on the same admin node
[5:55] <dmick> ok
[5:57] <Jakey> do you know why it's timing out
[5:57] <Jakey> i can't run the "ceph" command
[5:57] <dmick> when the cluster's not healthy, yes, you can't
[5:58] <Jakey> so what should i do
[5:59] <Jakey> i'm stuck here
[5:59] <dmick> what do you expect sudo chmod +r /etc/ceph/ceph.client.admin.keyring to do?
[5:59] <Jakey> dmick: i just following the guide
[5:59] <Jakey> http://ceph.com/docs/master/start/quick-ceph-deploy/
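The "monclient(hunting): authenticate timed out" error Jakey hit means the client never reached a working monitor at all; the keyring permissions are not the issue. A sketch of the usual first checks, run on the admin node (commands assume a standard quickstart deployment; the timeout value is illustrative):

```shell
# Is a ceph-mon process actually running and listening on the
# default monitor port?
ss -tlnp | grep 6789

# Try the cluster with an explicit short timeout instead of the
# default 300-second hunt, to fail fast while debugging.
ceph -s --connect-timeout 10

# If the mon is reachable, confirm the local admin keyring matches
# what the monitors have stored.
ceph auth get client.admin
```

If the monitor process is down or listening on the wrong address, fixing that (and the `mon_host` entry in /etc/ceph/ceph.conf) is the first step before anything else in the quickstart will work.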
[6:03] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Quit: Leaving.)
[6:17] * vbellur (~vijay@122.166.167.181) Quit (Ping timeout: 480 seconds)
[6:19] * michalefty (~micha@188-195-129-145-dynip.superkabel.de) Quit (Quit: Leaving.)
[6:19] * Cube (~Cube@66.87.64.223) has joined #ceph
[6:19] * theanalyst (~abhi@49.32.3.36) has joined #ceph
[6:24] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[6:27] * trond (~trond@evil-server.alseth.info) Quit (Quit: Lost terminal)
[6:29] * trond (~trond@evil-server.alseth.info) has joined #ceph
[6:29] * Cube (~Cube@66.87.64.223) Quit (Quit: Leaving.)
[6:32] * janos_ (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) has joined #ceph
[6:33] * v2_ (~venky@ov42.x.rootbsd.net) has joined #ceph
[6:34] * dgarcia_ (~dgarcia@50-73-137-146-ip-static.hfc.comcastbusiness.net) has joined #ceph
[6:35] * trond (~trond@evil-server.alseth.info) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * bkopilov (~bkopilov@213.57.16.55) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * Discard_ (~discard@213-245-29-151.rev.numericable.fr) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * allig8r (~allig8r@128.135.219.116) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * kfei (~root@114-27-53-253.dynamic.hinet.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * v2 (~venky@ov42.x.rootbsd.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * rwheeler (~rwheeler@173.48.207.57) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * hflai (~hflai@alumni.cs.nctu.edu.tw) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * bitserker (~toni@63.pool85-52-240.static.orange.es) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * jsfrerot (~jsfrerot@192.222.132.57) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * mondkalbantrieb (~quassel@mondkalbantrieb.de) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * nhm (~nhm@65-128-152-189.mpls.qwest.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * imjustmatthew (~imjustmat@pool-74-110-226-158.rcmdva.fios.verizon.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * houkouonchi-home (~linux@pool-71-189-160-82.lsanca.fios.verizon.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * fretb (~fretb@drip.frederik.pw) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * terje (~joey@184-96-155-130.hlrn.qwest.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * wedge (lordsilenc@bigfoot.xh.se) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * magicrobotmonkey (~abassett@ec2-50-18-55-253.us-west-1.compute.amazonaws.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * saturnine (~saturnine@ashvm.saturne.in) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * winston-d (~ubuntu@ec2-54-244-213-72.us-west-2.compute.amazonaws.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * gleam (gleam@dolph.debacle.org) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * fouxm (~foucault@ks3363630.kimsufi.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * mongo (~gdahlman@voyage.voipnw.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * tnt_ (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * acaos (~zac@209-99-103-42.fwd.datafoundry.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * jackhill (~jackhill@bog.hcoop.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * Fetch (fetch@gimel.cepheid.org) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * blynch (~blynch@vm-nat.msi.umn.edu) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * Guest625 (~coyo@209.148.95.237) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * jksM- (~jks@3e6b5724.rev.stofanet.dk) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * rturk-away (~rturk@ds2390.dreamservers.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * stj (~stj@tully.csail.mit.edu) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * wer (~wer@206-248-239-142.unassigned.ntelos.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * joshwambua (~joshwambu@154.72.0.90) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * cronix1 (~cronix@5.199.139.166) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * lincolnb (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * dgarcia (~dgarcia@50-73-137-146-ip-static.hfc.comcastbusiness.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * janos (~messy@static-71-176-211-4.rcmdva.fios.verizon.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * carter (~carter@li98-136.members.linode.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * Meths (~meths@2.25.191.11) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * amospalla (~amospalla@amospalla.es) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * MK_FG (~MK_FG@00018720.user.oftc.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * purpleidea (~james@199.180.99.171) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * athrift (~nz_monkey@203.86.205.13) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * tank100 (~tank@84.200.17.138) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * Psi-Jack (~psi-jack@psi-jack.user.oftc.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * KindOne (kindone@0001a7db.user.oftc.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * [caveman] (~quassel@boxacle.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * tchmnkyz (~jeremy@0001638b.user.oftc.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * pmatulis (~peter@ec2-23-23-42-7.compute-1.amazonaws.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * zackc (~zackc@0001ba60.user.oftc.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * tcatm (~quassel@mneme.draic.info) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * SpamapS (~clint@xencbyrum2.srihosting.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * Azrael (~azrael@terra.negativeblue.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * sage__ (~quassel@gw.sepia.ceph.com) Quit (resistance.oftc.net oxygen.oftc.net)
[6:35] * KindOne (kindone@107.170.17.75) has joined #ceph
[6:35] * Azrael (~azrael@terra.negativeblue.com) has joined #ceph
[6:35] * carter (~carter@li98-136.members.linode.com) has joined #ceph
[6:35] * [cave] (~quassel@boxacle.net) has joined #ceph
[6:36] * jsfrerot (~jsfrerot@192.222.132.57) has joined #ceph
[6:36] * pmatulis (~peter@ec2-23-23-42-7.compute-1.amazonaws.com) has joined #ceph
[6:36] * gleam (gleam@dolph.debacle.org) has joined #ceph
[6:36] * rwheeler (~rwheeler@173.48.207.57) has joined #ceph
[6:36] * rdas (~rdas@121.244.87.115) has joined #ceph
[6:36] * nhm (~nhm@65-128-152-189.mpls.qwest.net) has joined #ceph
[6:36] * ChanServ sets mode +o nhm
[6:36] * zackc (~zackc@0001ba60.user.oftc.net) has joined #ceph
[6:36] * joef (~Adium@2601:9:2a00:690:c415:df3a:1394:653c) has joined #ceph
[6:36] * Azrael is now known as Guest2821
[6:36] * joef (~Adium@2601:9:2a00:690:c415:df3a:1394:653c) has left #ceph
[6:36] * stj (~stj@2001:470:8b2d:bb8:21d:9ff:fe29:8a6a) has joined #ceph
[6:36] * tcatm (~quassel@2a01:4f8:151:13c3:5054:ff:feff:cbce) has joined #ceph
[6:36] * acaos (~zac@209.99.103.42) has joined #ceph
[6:36] * analbeard (~shw@support.memset.com) Quit (Quit: Leaving.)
[6:36] * cronix1 (~cronix@5.199.139.166) has joined #ceph
[6:36] * jksM_ (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[6:37] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[6:37] * trond (~trond@evil-server.alseth.info) has joined #ceph
[6:37] * fretb (~fretb@drip.frederik.pw) has joined #ceph
[6:37] * gdavis33 (~gdavis@38.122.12.254) has joined #ceph
[6:37] * purpleidea (~james@199.180.99.171) has joined #ceph
[6:37] * amospalla (~amospalla@0001a39c.user.oftc.net) has joined #ceph
[6:38] * imjustmatthew (~imjustmat@pool-74-110-226-158.rcmdva.fios.verizon.net) has joined #ceph
[6:38] * scuttle|afk (~scuttle@nat-pool-rdu-t.redhat.com) has joined #ceph
[6:38] * Meths (~meths@2.25.191.11) has joined #ceph
[6:38] * AfC1 (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[6:42] * Psi-Jack (~psi-jack@psi-jack.user.oftc.net) has joined #ceph
[6:46] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[6:46] * bkopilov (~bkopilov@213.57.16.55) has joined #ceph
[6:46] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) has joined #ceph
[6:46] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[6:46] * Discard_ (~discard@213-245-29-151.rev.numericable.fr) has joined #ceph
[6:46] * allig8r (~allig8r@128.135.219.116) has joined #ceph
[6:46] * kfei (~root@114-27-53-253.dynamic.hinet.net) has joined #ceph
[6:46] * mtl1 (~Adium@c-67-174-109-212.hsd1.co.comcast.net) has joined #ceph
[6:46] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[6:46] * hflai (~hflai@alumni.cs.nctu.edu.tw) has joined #ceph
[6:46] * bitserker (~toni@63.pool85-52-240.static.orange.es) has joined #ceph
[6:46] * mondkalbantrieb (~quassel@mondkalbantrieb.de) has joined #ceph
[6:46] * houkouonchi-home (~linux@pool-71-189-160-82.lsanca.fios.verizon.net) has joined #ceph
[6:46] * ron-slc (~Ron@173-165-129-125-utah.hfc.comcastbusiness.net) has joined #ceph
[6:46] * sage__ (~quassel@gw.sepia.ceph.com) has joined #ceph
[6:46] * lincolnb (~lincoln@c-67-165-142-226.hsd1.il.comcast.net) has joined #ceph
[6:46] * joshwambua (~joshwambu@154.72.0.90) has joined #ceph
[6:46] * wer (~wer@206-248-239-142.unassigned.ntelos.net) has joined #ceph
[6:46] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[6:46] * Karcaw (~evan@71-95-122-38.dhcp.mdfd.or.charter.com) has joined #ceph
[6:46] * lkoranda (~lkoranda@nat-pool-brq-t.redhat.com) has joined #ceph
[6:46] * xdeller (~xdeller@h195-91-128-218.ln.rinet.ru) has joined #ceph
[6:46] * rturk-away (~rturk@ds2390.dreamservers.com) has joined #ceph
[6:46] * Guest625 (~coyo@209.148.95.237) has joined #ceph
[6:46] * athrift (~nz_monkey@203.86.205.13) has joined #ceph
[6:46] * terje (~joey@184-96-155-130.hlrn.qwest.net) has joined #ceph
[6:46] * wedge (lordsilenc@bigfoot.xh.se) has joined #ceph
[6:46] * MK_FG (~MK_FG@00018720.user.oftc.net) has joined #ceph
[6:46] * magicrobotmonkey (~abassett@ec2-50-18-55-253.us-west-1.compute.amazonaws.com) has joined #ceph
[6:46] * saturnine (~saturnine@ashvm.saturne.in) has joined #ceph
[6:46] * winston-d (~ubuntu@ec2-54-244-213-72.us-west-2.compute.amazonaws.com) has joined #ceph
[6:46] * tchmnkyz (~jeremy@0001638b.user.oftc.net) has joined #ceph
[6:46] * SpamapS (~clint@xencbyrum2.srihosting.com) has joined #ceph
[6:46] * fouxm (~foucault@ks3363630.kimsufi.com) has joined #ceph
[6:46] * bens (~ben@c-71-231-52-111.hsd1.wa.comcast.net) has joined #ceph
[6:46] * mongo (~gdahlman@voyage.voipnw.net) has joined #ceph
[6:46] * tnt_ (~tnt@ec2-54-200-98-43.us-west-2.compute.amazonaws.com) has joined #ceph
[6:46] * jackhill (~jackhill@bog.hcoop.net) has joined #ceph
[6:46] * tank100 (~tank@84.200.17.138) has joined #ceph
[6:46] * Fetch (fetch@gimel.cepheid.org) has joined #ceph
[6:46] * blynch (~blynch@vm-nat.msi.umn.edu) has joined #ceph
[6:47] * sage__ (~quassel@gw.sepia.ceph.com) Quit (Remote host closed the connection)
[6:47] * vbellur (~vijay@121.244.87.124) has joined #ceph
[6:53] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[7:01] * jrcresawn (~jrcresawn@ip24-251-38-21.ph.ph.cox.net) has joined #ceph
[7:03] * jrcresawn (~jrcresawn@ip24-251-38-21.ph.ph.cox.net) Quit (Remote host closed the connection)
[7:10] * AfC (~andrew@203.191.203.202) has joined #ceph
[7:10] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[7:11] * saurabh (~saurabh@121.244.87.117) has joined #ceph
[7:15] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[7:17] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:18] * baylight (~tbayly@204.15.85.169) has joined #ceph
[7:18] * baylight (~tbayly@204.15.85.169) has left #ceph
[7:26] * AfC (~andrew@203.191.203.202) Quit (Quit: Leaving.)
[7:26] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[7:27] * AfC (~andrew@203.191.203.202) has joined #ceph
[7:27] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit ()
[7:28] * michalefty (~micha@p20030071CF50C60085D9CF4F802FAE44.dip0.t-ipconnect.de) has joined #ceph
[7:34] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[7:35] * b0e (~aledermue@x2f36821.dyn.telefonica.de) has joined #ceph
[7:37] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) Quit (Ping timeout: 480 seconds)
[7:39] * sjm (~sjm@pool-72-76-115-220.nwrknj.fios.verizon.net) has left #ceph
[7:44] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[7:45] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[7:46] * lalatenduM (~lalatendu@121.244.87.117) has joined #ceph
[7:46] * lucas1 (~Thunderbi@218.76.25.66) Quit ()
[7:47] * b0e (~aledermue@x2f36821.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[7:47] * bkopilov (~bkopilov@213.57.16.55) Quit (Read error: Operation timed out)
[7:51] * Hell_Fire (~HellFire@123-243-155-184.static.tpgi.com.au) Quit (Read error: Connection reset by peer)
[7:54] * Hell_Fire (~HellFire@123-243-155-184.static.tpgi.com.au) has joined #ceph
[7:56] * ponyofdeath (~vladi@cpe-66-27-98-26.san.res.rr.com) Quit (Quit: leaving)
[7:59] * ponyofdeath (~vladi@cpe-66-27-98-26.san.res.rr.com) has joined #ceph
[8:05] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) Quit (Quit: Leaving.)
[8:12] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[8:22] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) has joined #ceph
[8:24] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[8:25] * sputnik1_ (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit ()
[8:30] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[8:37] * thomnico (~thomnico@2a01:e35:8b41:120:5b7:24b1:2fe7:700f) has joined #ceph
[8:38] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[8:39] * rendar (~I@host217-119-dynamic.53-82-r.retail.telecomitalia.it) has joined #ceph
[8:39] * thb (~me@port-23619.pppoe.wtnet.de) has joined #ceph
[8:39] * huangjun (~kvirc@59.173.185.197) Quit (Read error: Connection reset by peer)
[8:40] * huangjun (~kvirc@59.173.185.197) has joined #ceph
[8:42] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) has joined #ceph
[8:45] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[8:48] * AfC (~andrew@203.191.203.202) Quit (Ping timeout: 480 seconds)
[8:50] * huangjun (~kvirc@59.173.185.197) Quit (Read error: Connection reset by peer)
[8:51] * huangjun (~kvirc@59.173.185.197) has joined #ceph
[8:51] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[8:51] * Sysadmin88 (~IceChat77@94.4.20.0) Quit (Quit: Friends help you move. Real friends help you move bodies.)
[8:51] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[8:55] * vbellur (~vijay@121.244.87.117) has joined #ceph
[8:59] * zack_dol_ (~textual@e0109-114-22-13-18.uqwimax.jp) has joined #ceph
[8:59] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Ping timeout: 480 seconds)
[9:00] * ponyofdeath (~vladi@cpe-66-27-98-26.san.res.rr.com) Quit (Ping timeout: 480 seconds)
[9:00] * DV__ (~veillard@veillard.com) Quit (Remote host closed the connection)
[9:00] * DV__ (~veillard@2001:41d0:1:d478::1) has joined #ceph
[9:02] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[9:03] * zack_dolby (~textual@e0109-114-22-13-18.uqwimax.jp) Quit (Ping timeout: 480 seconds)
[9:03] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[9:04] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:04] * dvanders (~dvanders@2001:1458:202:180::102:f6c7) has joined #ceph
[9:07] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[9:07] * dignus (~jkooijman@t-x.dignus.nl) has joined #ceph
[9:08] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) has joined #ceph
[9:13] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[9:13] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[9:13] * ChanServ sets mode +v andreask
[9:16] * mizal (~mizal@ip70-187-179-66.oc.oc.cox.net) has joined #ceph
[9:18] * vbellur (~vijay@121.244.87.124) has joined #ceph
[9:19] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[9:20] <mizal> Hi, newbie here. Has anyone encountered "failed to load plugin" using profile default when creating an EC pool?
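A "failed to load plugin" error on erasure-coded pool creation usually means the erasure-code plugin shared objects are missing on the mon/OSD hosts. A sketch of the checks (pool name and PG counts are illustrative):

```shell
# Inspect the default erasure-code profile; the "plugin" field is
# normally "jerasure" on stock installs.
ceph osd erasure-code-profile get default

# Verify the plugin .so files are actually installed on the hosts
# (path varies by distro: /usr/lib/ceph/... or /usr/lib64/ceph/...).
ls /usr/lib*/ceph/erasure-code/

# Once the plugin files are present everywhere, retry pool creation.
ceph osd pool create ecpool 128 128 erasure default
```

Mixed package versions across hosts (some missing the erasure-code plugin package) are a common cause.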
[9:21] * simulx (~simulx@66-194-114-178.static.twtelecom.net) Quit (Quit: Nettalk6 - www.ntalk.de)
[9:22] * zack_dol_ (~textual@e0109-114-22-13-18.uqwimax.jp) Quit (Read error: No route to host)
[9:22] * zack_dolby (~textual@e0109-114-22-13-18.uqwimax.jp) has joined #ceph
[9:23] * odyssey4me (~odyssey4m@165.233.205.190) has joined #ceph
[9:23] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[9:24] * odyssey4me (~odyssey4m@165.233.205.190) Quit ()
[9:24] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[9:28] * ikrstic (~ikrstic@109-92-251-185.dynamic.isp.telekom.rs) has joined #ceph
[9:32] * lcavassa (~lcavassa@89.184.114.246) has joined #ceph
[9:37] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[9:39] * fsimonce (~simon@host50-69-dynamic.46-79-r.retail.telecomitalia.it) has joined #ceph
[9:40] * mizal (~mizal@ip70-187-179-66.oc.oc.cox.net) Quit (Remote host closed the connection)
[9:40] * hyperbaba__ (~hyperbaba@private.neobee.net) has joined #ceph
[9:43] * ikrstic (~ikrstic@109-92-251-185.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[9:45] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) Quit (Quit: vilobhmm)
[9:50] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[9:50] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[9:51] * MACscr (~Adium@c-50-158-183-38.hsd1.il.comcast.net) has joined #ceph
[9:53] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[9:57] * ghartz (~ghartz@ircad17.u-strasbg.fr) Quit (Remote host closed the connection)
[10:04] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[10:04] * ghartz (~ghartz@ircad17.u-strasbg.fr) has joined #ceph
[10:04] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[10:05] * oomkiller (oomkiller@d.clients.kiwiirc.com) has joined #ceph
[10:05] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[10:06] * andreask (~andreask@zid-vpnn093.uibk.ac.at) has joined #ceph
[10:06] * ChanServ sets mode +v andreask
[10:07] * AfC (~andrew@nat-gw2.syd4.anchor.net.au) Quit (Quit: Leaving.)
[10:09] * flaxy (~afx@78.130.171.69) Quit (Ping timeout: 480 seconds)
[10:10] * dneary (~dneary@87-231-145-225.rev.numericable.fr) has joined #ceph
[10:11] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[10:12] * shang (~ShangWu@175.41.48.77) Quit (Ping timeout: 480 seconds)
[10:17] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[10:18] * TMM (~hp@178-84-46-106.dynamic.upc.nl) Quit (Ping timeout: 480 seconds)
[10:21] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[10:21] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[10:21] * vbellur (~vijay@121.244.87.117) has joined #ceph
[10:24] * zidarsk8 (~zidar@46.54.226.50) has joined #ceph
[10:24] * zidarsk8 (~zidar@46.54.226.50) has left #ceph
[10:27] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[10:34] * shang (~ShangWu@175.41.48.77) has joined #ceph
[10:41] * sz0 (~sz0@46.197.39.119) has joined #ceph
[10:45] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:46] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Read error: Operation timed out)
[10:47] * vbellur (~vijay@121.244.87.117) Quit (Read error: Operation timed out)
[10:54] * lucas1 (~Thunderbi@218.76.25.66) has joined #ceph
[10:56] * sz0_ (~sz0@162.211.179.43) has joined #ceph
[10:57] * sz0 (~sz0@46.197.39.119) Quit (Remote host closed the connection)
[10:58] * aldavud (~aldavud@213.55.184.137) has joined #ceph
[10:59] * dvanders_ (~dvanders@2001:1458:202:f4::101:f6c7) has joined #ceph
[11:01] * dvanders (~dvanders@2001:1458:202:180::102:f6c7) Quit (Read error: Connection reset by peer)
[11:01] * vbellur (~vijay@121.244.87.124) has joined #ceph
[11:03] * andreask (~andreask@zid-vpnn093.uibk.ac.at) has left #ceph
[11:03] * dneary (~dneary@87-231-145-225.rev.numericable.fr) Quit (Ping timeout: 480 seconds)
[11:04] * sz0_ (~sz0@162.211.179.43) Quit (Ping timeout: 480 seconds)
[11:07] * dvanders_ (~dvanders@2001:1458:202:f4::101:f6c7) Quit (Ping timeout: 480 seconds)
[11:09] * dvanders (~dvanders@2001:1458:202:180::102:f6c7) has joined #ceph
[11:15] * danieljh (~daniel@0001b4e9.user.oftc.net) Quit (Quit: Lost terminal)
[11:20] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[11:33] * WintermeW_ (~WintermeW@212-83-158-61.rev.poneytelecom.eu) Quit (Quit: leaving)
[11:33] * aldavud (~aldavud@213.55.184.137) Quit (Read error: Connection reset by peer)
[11:34] * jordanP (~jordan@2a04:2500:0:b00:7922:ada3:b4fe:27e8) has joined #ceph
[11:35] * zack_dolby (~textual@e0109-114-22-13-18.uqwimax.jp) Quit (Quit: My MacBook has gone to sleep. ZZZzzz???)
[11:36] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[11:37] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[11:37] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[11:37] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[11:38] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[11:39] * LeaChim (~LeaChim@host86-161-90-156.range86-161.btcentralplus.com) has joined #ceph
[11:42] * cookednoodles (~eoin@eoin.clanslots.com) Quit (Remote host closed the connection)
[11:42] * ScOut3R_ (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[11:43] * cookednoodles (~eoin@eoin.clanslots.com) has joined #ceph
[11:43] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[11:44] * dvanders (~dvanders@2001:1458:202:180::102:f6c7) Quit (Ping timeout: 480 seconds)
[11:44] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Read error: Operation timed out)
[11:46] * jordanP (~jordan@2a04:2500:0:b00:7922:ada3:b4fe:27e8) Quit (Ping timeout: 480 seconds)
[11:46] * ScOut3R_ (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Remote host closed the connection)
[11:47] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) has joined #ceph
[11:48] * vbellur (~vijay@121.244.87.124) Quit (Read error: Operation timed out)
[11:50] * dneary (~dneary@87-231-145-225.rev.numericable.fr) has joined #ceph
[11:55] * jordanP (~jordan@185.23.92.11) has joined #ceph
[11:55] * lucas1 (~Thunderbi@218.76.25.66) Quit (Quit: lucas1)
[11:56] * brunoleon (~quassel@ARennes-658-1-5-66.w83-199.abo.wanadoo.fr) has joined #ceph
[12:02] * zack_dolby (~textual@e0109-114-22-13-18.uqwimax.jp) has joined #ceph
[12:02] * zack_dolby (~textual@e0109-114-22-13-18.uqwimax.jp) Quit ()
[12:03] * vbellur (~vijay@121.244.87.124) has joined #ceph
[12:03] * theanalyst (~abhi@49.32.3.36) Quit (Ping timeout: 480 seconds)
[12:05] * ScOut3R (~ScOut3R@catv-89-133-22-210.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[12:06] * lupu (~lupu@86.107.101.214) has joined #ceph
[12:07] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[12:09] * flaxy (~afx@78.130.171.69) has joined #ceph
[12:09] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) has joined #ceph
[12:15] * yuriw (~Adium@c-76-126-35-111.hsd1.ca.comcast.net) has joined #ceph
[12:15] * dneary (~dneary@87-231-145-225.rev.numericable.fr) Quit (Read error: Operation timed out)
[12:16] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[12:19] * stephan (~stephan@62.217.45.26) has joined #ceph
[12:25] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[12:30] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:33] * markbby (~Adium@168.94.245.4) has joined #ceph
[12:34] * brunoleon (~quassel@ARennes-658-1-5-66.w83-199.abo.wanadoo.fr) Quit (Remote host closed the connection)
[12:34] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[12:34] * brunoleon (~quassel@ARennes-658-1-5-66.w83-199.abo.wanadoo.fr) has joined #ceph
[12:34] * brunoleon (~quassel@ARennes-658-1-5-66.w83-199.abo.wanadoo.fr) Quit (Remote host closed the connection)
[12:35] * brunoleon (~quassel@ARennes-658-1-5-66.w83-199.abo.wanadoo.fr) has joined #ceph
[12:36] * markbby (~Adium@168.94.245.4) has joined #ceph
[12:43] * huangjun (~kvirc@59.173.185.197) Quit (Read error: Operation timed out)
[12:44] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Ping timeout: 480 seconds)
[12:46] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[12:51] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[12:54] * shang (~ShangWu@175.41.48.77) Quit (Read error: Operation timed out)
[12:55] * jtaguinerd (~jtaguiner@103.14.60.99) has joined #ceph
[12:57] * zack_dolby (~textual@p67f6b6.tokynt01.ap.so-net.ne.jp) has joined #ceph
[12:58] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[12:58] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) has joined #ceph
[12:59] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) Quit ()
[13:00] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) has joined #ceph
[13:02] * lucas1 (~Thunderbi@222.247.57.50) has joined #ceph
[13:07] * lucas1 (~Thunderbi@222.247.57.50) Quit ()
[13:18] * saurabh (~saurabh@121.244.87.117) Quit (Quit: Leaving)
[13:21] * zhaochao (~zhaochao@106.38.204.72) has left #ceph
[13:26] * vbellur (~vijay@121.244.87.124) Quit (Ping timeout: 480 seconds)
[13:34] * theanalyst (~abhi@49.32.3.8) has joined #ceph
[13:35] * vbellur (~vijay@121.244.87.117) has joined #ceph
[13:40] <oomkiller> is it possible to copy a snapshot (flattened or not) to a different pool (ie for backup purposes) ?
[13:40] <cookednoodles> I presume you can do a qemu-img convert
[13:42] <oomkiller> export it and import it again, or what do you mean?
[13:42] * diegows (~diegows@190.190.5.238) has joined #ceph
[13:44] <oomkiller> is it possible by protecting the snap, then cloning it into the other pool, flattening the clone in the new pool, and then unprotecting the originating snap?
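A rough sketch of both approaches being asked about, using placeholder pool and image names (`rbd` can clone a protected snapshot into a different pool, and export/import works as a plain full copy):

```shell
# clone route: protect, clone into the backup pool, flatten, unprotect
rbd snap protect images/vm1@snap1
rbd clone images/vm1@snap1 backup/vm1-copy
rbd flatten backup/vm1-copy          # detach the clone from its parent
rbd snap unprotect images/vm1@snap1  # allowed once no unflattened clones remain

# export/import route: stream the snapshot through a pipe
rbd export images/vm1@snap1 - | rbd import - backup/vm1-copy
```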
[13:44] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[13:48] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[13:55] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[13:56] <brunoleon> hello. I'm still trying to get an active+clean status for my PGs but cannot figure out how.
[13:56] <brunoleon> must be missing something big but any help would be greatly appreciated
[13:57] <brunoleon> config is 3 mon + 3 osd, all on one host BUT on different VMs
[13:57] <brunoleon> mon(s) can see everybody, but PG stays incomplete
[14:06] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[14:10] * sz0 (~sz0@46.197.39.119) has joined #ceph
[14:15] * dlan (~dennis@116.228.88.131) Quit (Remote host closed the connection)
[14:15] * dlan (~dennis@116.228.88.131) has joined #ceph
[14:17] * sz0 (~sz0@46.197.39.119) Quit (Read error: Operation timed out)
[14:18] * theanalyst (~abhi@49.32.3.8) Quit (Ping timeout: 480 seconds)
[14:22] * sz0 (~sz0@server-176.53.12.97.as42926.net) has joined #ceph
[14:26] <janos_> can all the osd host vm's see each other?
[14:33] * CephFan1 (~textual@68-233-224-175.static.hvvc.us) has joined #ceph
[14:33] * sz0_ (~sz0@46.197.39.119) has joined #ceph
[14:34] * sz0 (~sz0@server-176.53.12.97.as42926.net) Quit (Ping timeout: 480 seconds)
[14:34] * KevinPerks (~Adium@cpe-174-098-096-200.triad.res.rr.com) has joined #ceph
[14:40] <brunoleon> how do you check that in ceph terms ?
[14:42] * vbellur1 (~vijay@121.244.87.124) has joined #ceph
[14:42] <brunoleon> i mean I know they can "see" each other regarding the network
[14:43] * vbellur1 (~vijay@121.244.87.124) Quit ()
[14:43] <brunoleon> but is this a must for them to be declared in each others' ceph.conf ?
[14:44] * vbellur (~vijay@121.244.87.117) Quit (Ping timeout: 480 seconds)
[14:57] <janos_> they should have port range 6800-7100 open iirc, it's been a while
[14:57] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[14:57] <janos_> and should all be part of the same cluster. i haven't used ceph-deploy or any newer methods, so mine are all listed in ceph.conf
[14:57] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[14:59] * huangjun (~kvirc@117.151.46.118) has joined #ceph
[15:00] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[15:00] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[15:03] * michalefty (~micha@p20030071CF50C60085D9CF4F802FAE44.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[15:05] <mo-> Im noticing highly fluctuating write rates in ceph -w while doing a consistent write and was wondering whether thats normal or indicating a bottleneck somewhere
[15:05] * baylight (~tbayly@204.15.85.169) has joined #ceph
[15:06] <mo-> because Ive only ever seen it behave this way but never gave it any kind of thought
[15:07] * danieagle (~Daniel@179.184.165.184.static.gvt.net.br) has joined #ceph
[15:09] * flaxy (~afx@78.130.171.69) Quit (Ping timeout: 480 seconds)
[15:11] * flaxy (~afx@78.130.171.69) has joined #ceph
[15:12] * scuttle|afk is now known as scuttlemonkey
[15:13] * jtaguinerd1 (~jtaguiner@112.205.13.239) has joined #ceph
[15:14] * rwheeler (~rwheeler@173.48.207.57) Quit (Quit: Leaving)
[15:14] * vbellur (~vijay@122.167.231.9) has joined #ceph
[15:15] * huangjun (~kvirc@117.151.46.118) Quit (Read error: Connection reset by peer)
[15:17] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[15:18] * jtaguinerd (~jtaguiner@103.14.60.99) Quit (Ping timeout: 480 seconds)
[15:25] * brad_mssw (~brad@shop.monetra.com) has joined #ceph
[15:25] * dneary (~dneary@87-231-145-225.rev.numericable.fr) has joined #ceph
[15:25] <mo-> like... the wr might say 180MB/s and the next message would have 8000 kB/s (ish), always alternating
[15:35] * vmx (~vmx@p508A4C25.dip0.t-ipconnect.de) has joined #ceph
[15:35] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) has joined #ceph
[15:38] * shang (~ShangWu@1-162-70-181.dynamic.hinet.net) has joined #ceph
[15:39] * baylight (~tbayly@204.15.85.169) Quit (Ping timeout: 480 seconds)
[15:42] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:45] * hyperbaba__ (~hyperbaba@private.neobee.net) Quit (Ping timeout: 480 seconds)
[15:48] * markbby (~Adium@168.94.245.4) has joined #ceph
[15:52] * guerby (~guerby@ip165-ipv6.tetaneutral.net) Quit (Quit: Leaving)
[15:52] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[15:52] * guerby (~guerby@ip165-ipv6.tetaneutral.net) has joined #ceph
[15:57] <jnq> I've just upgraded from 0.56.7 to 0.61.9 on Ubuntu 12.04. Having some trouble getting my monitors to start now, can anyone lend a hand?
[16:00] <jnq> or possibly just shed some light...
[16:10] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[16:10] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Ping timeout: 480 seconds)
[16:11] <tnt_> jnq: did you upgrade all the mons ?
[16:11] * Guest2821 is now known as Azrael
[16:11] <tnt_> jnq: during the first boot they will take a significant amount of time to boot because of the internal format conversion ... do not kill them ...
[16:13] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) has joined #ceph
[16:14] <jnq> they don't even start, i just get this
[16:14] <jnq> failed: 'ulimit -n 8192; /usr/bin/ceph-mon -i b --pid-file /var/run/ceph/mon.b.pid -c /etc/ceph/ceph.conf '
[16:14] <jnq> nothing more in the log
[16:15] * rdas (~rdas@121.244.87.115) Quit (Quit: Leaving)
[16:15] <joao> that's probably just the ulimit failing
[16:15] * jtaguinerd1 (~jtaguiner@112.205.13.239) Quit (Read error: Connection reset by peer)
[16:15] * jtaguinerd (~jtaguiner@112.205.13.239) has joined #ceph
[16:15] <joao> you'll need to adjust the hard cap for file descriptors on your system
[16:16] <joao> have you checked 'ps' to see if the mon is running?
[16:16] <jnq> aye, not running. pretty embarrassing if that's what is stopping it
[16:18] <joao> shouldn't
[16:19] <joao> try running the monitor manually, add '--debug-mon 10' to the command, tail the log to see if anything pops up
[16:20] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[16:22] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) has joined #ceph
[16:24] * theanalyst (~abhi@49.32.3.8) has joined #ceph
[16:24] <jnq> joao: still nothing, just https://p.6core.net/p/yqTpAEWIcBPHALYzyajEetPj
[16:24] <jnq> no debug...
[16:25] * markl (~mark@knm.org) has joined #ceph
[16:25] <joao> and then it doesn't keep on running?
[16:26] <jnq> nope, just end up with a spinning ceph-create-keys
[16:26] <joao> jnq, let's try this: ceph-mon -i b --debug-mon 20 -d 2>&1 | tee -a foo.log
[16:27] <joao> -d keeps the process attached
[16:27] <joao> (to the shell)
[16:27] <joao> when weirdness knocks this usually helps
[16:28] <mo-> be aware that you may not see any output after the first line (with the PID in it) for a good few minutes
[16:28] <jnq> ahh, ok
[16:28] <joao> is that something that happens?
[16:28] <jnq> https://p.6core.net/p/4IYkj8BYzZPBhWciVCPcQM6k
[16:29] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[16:29] <mo-> youre not root or the mon is already running
[16:29] <joao> jnq, 'sudo' ?
[16:29] <joao> mo-, mon running would probably result in an EBUSY
[16:29] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[16:29] <jnq> i'm having a bad day. sorry.
[16:29] <jnq> just get "unable to read magic from mon data.. did you run mkcephfs?" now
[16:30] <joao> well, did you?
[16:30] <joao> are the mons in a default location?
[16:30] <jnq> i ran it on my 0.56.7 cluster & they are in the default location
[16:31] <joao> ls /var/lib/ceph/mon/ceph-b
[16:31] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[16:31] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[16:31] * swills` (~swills@mouf.net) has left #ceph
[16:31] <jnq> https://p.6core.net/p/8TQHgD5pnBgFgjDZVkvPPoac
[16:32] <joao> mv /var/lib/ceph/mon/ceph-b/store.db /somepath/store.db
[16:33] <joao> then add 'debug mon = 10' to your ceph.conf, restart the monitor with upstart and keep an eye on the log
[16:33] <joao> store will convert, which may take a few seconds to several minutes (depending on how big your store is)
[16:33] <joao> after that the monitor should run fine
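The recovery steps joao describes, as a sketch (mon id `b` and the default paths from above; the backup destination is a placeholder, and the service command depends on whether the node uses sysvinit or upstart):

```shell
# move the possibly-empty converted store out of the way
mv /var/lib/ceph/mon/ceph-b/store.db /root/store.db.bak

# in ceph.conf, under [mon] (or [mon.b]):
#   debug mon = 10

# restart the monitor and watch the conversion happen in the log
service ceph start mon.b        # sysvinit; on upstart: start ceph-mon id=b
tail -f /var/log/ceph/ceph-mon.b.log
```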
[16:33] <jnq> ok
[16:34] <jnq> great
[16:34] <jnq> thanks for that, i owe you a beer or something.
[16:34] <mo-> just so I get this, the store.db folder is obviously the new format, so youre moving it away so that it restarts the conversion from the old format, right?
[16:34] * simulx (~simulx@vpn.expressionanalysis.com) has joined #ceph
[16:34] <joao> mo-, yes
[16:35] <joao> if the monitor "sees" store.db in the mon data dir it will assume a conversion already happened
[16:35] <joao> not really, but sort of
[16:35] <joao> moving it away will force the conversion yet again
[16:36] <mo-> got it
[16:36] <joao> the mon will also check if the store has a given flag in there marking an 'on-going' conversion, and if so will let the user know
[16:36] <joao> however, there's a window between firing up the mon and starting the conversion that may end up with a completely empty store (which I'm assuming was the case here)
[16:37] <mo-> being impatient with mon startups had caused me trouble before too, ha
[16:37] * bkopilov (~bkopilov@nat-pool-tlv-t.redhat.com) Quit (Read error: Operation timed out)
[16:37] <joao> obviously the store will only end up empty if the mon died for some reason (kills are usually the culprit) before conversion started
[16:38] <joao> yes, nowadays this is by far the most common issue with upgrading from bobtail :)
[16:38] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[16:38] <mo-> happened to me with a cuttlefish cluster, but thats still "earlier than dumpling" so... same thing ;)
[16:39] <mo-> if you don't mind tho, youve probably seen way more clusters than I have. I had posed this question earlier:
[16:40] <mo-> is it normal for the write rates in "ceph -w" to highly fluctuate between 180ish MB/s and 1-8 MB/s on every other message (or with 1 in 3)
[16:40] <mo-> (with a constant write going on in the form of a simple dd)
[16:41] <joao> I don't think so
[16:42] <joao> also, you're a brave one if you actually went through cuttlefish
[16:42] <mo-> I was wondering whether thatd indicate a bottleneck somewhere, which is entirely possible (inhomogenous hardware)
[16:43] <joao> we should have page dedicated to the brave souls that went through cuttlefish and came on the other side
[16:43] <joao> heterogeneous? :)
[16:43] <mo-> ha, you may not remember, but we had talked about that before. wasnt my choice, I got contacted by a guy that had installed a ceph cluster when cuttlefish was recent and he never did anything to it afterwards
[16:43] <mo-> yea see, thats the word!
[16:43] <tnt_> joao: heh, my prod cluster was created on Argonaut and went through it all :p
[16:44] <joao> tnt_, there are more than a few :p
[16:44] <janos_> yeah i got bit by being impatient on the mon store conversion
[16:44] * toMeloos (~toMeloos@82.201.93.194) has joined #ceph
[16:44] <mo-> the cluster is really kinda weird. its consisting of 2 "normal" boxes with 2TB disks and such, 1 box with SAS disks and 2 boxes with SCSI disks
[16:45] <mo-> out of all those, for some inexplicable reason, the SAS disks are the slowest by far (60MB/s per disk)
[16:45] <mo-> might be the controller, but yea...
[16:46] <darkfader> the scsi box has a raid controller (w/cache), the sas doesn't (but has writecache disabled like any non-desktop disk) and the sata disks have cache on because shit? :)
[16:46] * markbby (~Adium@168.94.245.4) Quit (Remote host closed the connection)
[16:46] * sz0_ (~sz0@46.197.39.119) Quit (Quit: My iMac has gone to sleep. ZZZzzz???)
[16:47] <mo-> actually, they all have controllers with caches iirc
[16:47] <mo-> so its all single-disk RAID0s
[16:47] * xarses (~andreww@c-76-126-112-92.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:47] <mo-> to make things more fun
[16:47] <darkfader> hmm, then the sas one would need to be misconfigured. weird for sure :)
[16:48] <mo-> the 2 new boxes have SSD journals, the SAS box has SSD journals, and the SCSI boxes have HDD journals (different HDD)
[16:49] <mo-> upping the network bandwidth of the scsi boxes to 2gbit has helped immensely with throughput, but the rates are still fluctuating
[16:50] <mo-> I wonder if these 2gbit could still be the limiting factor
[16:50] <toMeloos> Hi guys, quick very basic question. When using ceph caching tier, should the client connect to the caching pool or the regular pool?
[16:50] <gleam> caching
[16:50] <mo-> regular, if youve set up overlaying
[16:50] <gleam> wat
[16:51] <gleam> my mistake
[16:51] <toMeloos> hehe
[16:51] <toMeloos> but overlay is for writethrough only right?
[16:52] <mo-> not that im aware of, we experimented with cache pools in writeback mode
[16:52] * shang (~ShangWu@1-162-70-181.dynamic.hinet.net) Quit (Read error: Operation timed out)
[16:52] * theanalyst (~abhi@49.32.3.8) Quit (Ping timeout: 480 seconds)
[16:52] <mo-> we had to give up on those tests because the cache wouldnt heed the limits set and would just fill up (100%) and deadlock
[16:53] <toMeloos> sorry i meant writeback
[16:53] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[16:53] <toMeloos> so when using a writeback caching pool the client has to connect to the regular "backend" pool
[16:54] <toMeloos> and when using readonly caching?
[16:54] <mo-> yea, no chance in the client config required, apart from an upgrade in ceph-common / librbd2
[16:54] <mo-> *change
[16:55] <mo-> anyways, revisiting the weird cluster, all the SAS and SCSI boxes each have 2GBit for cluster and 2GBit for client communication, each in 802.4ad bonding
[16:55] <toMeloos> cool so the client should always just connect to the regular backend pool and ceph will handle the caching transparently?
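The writeback-tier-with-overlay setup being described is configured roughly like this (pool names are placeholders; with `set-overlay` in place, clients keep talking to the backing pool and I/O is redirected through the cache tier transparently):

```shell
ceph osd tier add coldpool hotpool            # attach hotpool as a tier of coldpool
ceph osd tier cache-mode hotpool writeback    # cache absorbs writes, flushes later
ceph osd tier set-overlay coldpool hotpool    # redirect client I/O through the cache
# give the tiering agent limits so the cache does not just fill up
ceph osd pool set hotpool target_max_bytes 1000000000000
```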
[16:56] <pressureman> 802.3ad
[16:56] <mo-> whoops, yea, typo
[16:56] <pressureman> one never knows, with IEEE ;-)
[16:56] <mo-> wondering if thats whats causing the fluctuations
[16:57] <toMeloos> thanks
[16:59] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) has joined #ceph
[17:00] * rturk|afk is now known as rturk
[17:04] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) has joined #ceph
[17:04] * KaZeR (~kazer@64.201.252.132) has joined #ceph
[17:07] * gdavis33 (~gdavis@38.122.12.254) has left #ceph
[17:07] * hyperbaba (~hyperbaba@80.74.175.250) has joined #ceph
[17:08] <mo-> weve been checking the network load with iptraf and that NEVER showed a port utilization that even came close to 90%, which seems odd
[17:08] <mo-> because doubling the bandwidth actually gave us more ceph speed
[17:12] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[17:14] * kevinc (~kevinc__@client65-40.sdsc.edu) has joined #ceph
[17:14] * adamcrume (~quassel@50.247.81.99) has joined #ceph
[17:19] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[17:19] * ponyofdeath (~vladi@cpe-66-27-98-26.san.res.rr.com) has joined #ceph
[17:19] * linuxkidd_ (~linuxkidd@2001:420:2280:1272:b0bd:57cb:f985:4d95) has joined #ceph
[17:20] * baylight (~tbayly@74-220-196-40.unifiedlayer.com) has joined #ceph
[17:20] * markbby (~Adium@168.94.245.3) has joined #ceph
[17:22] * BManojlovic (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:23] * i_m1 (~ivan.miro@gbibp9ph1--blueice2n2.emea.ibm.com) has joined #ceph
[17:23] * xarses (~andreww@12.164.168.117) has joined #ceph
[17:25] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) Quit (Read error: Operation timed out)
[17:27] <mo-> this is the ceph -w while a dd with bs=512M is running from inside a VM: https://p.6core.net/p/aFa8pIb9OFdtBUUXQAaUuBXZ first I only put the truncated lines for readability, followed by the original, full lines
[17:29] <mo-> could this be an artifact of client-side rbd caching maybe? I dont think oflag=direct inside a VM would bypass rbd caching
[17:29] <mo-> dd ends up showing 102MB/s which is tolerable, but not great
[17:29] * danieljh (~daniel@0001b4e9.user.oftc.net) has joined #ceph
[17:30] * madkiss (~madkiss@chello084112018119.27.11.vie.surfer.at) has joined #ceph
[17:30] * analbeard (~shw@host86-155-107-195.range86-155.btcentralplus.com) has joined #ceph
[17:30] * narb (~Jeff@38.99.52.10) has joined #ceph
[17:33] * toMeloos (~toMeloos@82.201.93.194) Quit (Quit: Ik ga weg)
[17:33] * madkiss (~madkiss@chello084112018119.27.11.vie.surfer.at) Quit ()
[17:46] <mo-> maybe Im over-simplifying things, but if you input 100mb/s to a box with 2GBit cluster bandwidth and pool size 3, it has to send 2 duplicates out (i.e. 200mb/s), which might be saturating the cluster network bandwidth... no?
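mo-'s arithmetic checks out: with replication size 3, every MB a primary OSD host receives from a client has to be forwarded to two replica hosts over the cluster network. A trivial sketch with the numbers from the discussion:

```shell
SIZE=3          # pool replication factor
CLIENT_MB_S=100 # client-facing ingest rate, MB/s
# the primary stores one copy locally and sends (SIZE - 1) copies out
REPLICA_MB_S=$(( CLIENT_MB_S * (SIZE - 1) ))
echo "cluster-network egress: ${REPLICA_MB_S} MB/s"  # ~1.6 Gbit/s of a 2 Gbit bond
```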
[17:47] <iggy> yes, 100MB/s is good for gigabit networking
[17:47] <mo-> well its 2GBit networking
[17:48] <mo-> for 1GBit, itd obviously be capped from that
[17:48] <iggy> most people with single gigabit seem to get about 60-70MB/s
[17:48] <mo-> the boxes have 4 interfaces. if I were to only have 1 if towards the client, Id have a 100MB/s bottleneck there, even if the cluster network would have 3GBit then
[17:49] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Read error: Operation timed out)
[17:50] <iggy> thank goodness 10G eth is coming down in price every day
[17:50] <mo-> whats a good ratio between client and cluster network bw then? this would obviously be dependent on the pool size... is there a formula or best practice?
[17:50] <mo-> it is, but its still very far from being reasonable (10 GE that is)
[17:51] * analbeard (~shw@host86-155-107-195.range86-155.btcentralplus.com) Quit (Quit: Leaving.)
[17:51] * hyperbaba (~hyperbaba@80.74.175.250) Quit (Ping timeout: 480 seconds)
[17:52] <brad_mssw> channel bonding doesn't help with single-stream, or single host to host because most algorithms choose the port based on destination MAC or IP address ... so you don't get aggregate bandwidth
[17:52] <brad_mssw> at least with lacp (802.3ad)
[17:53] <mo-> hu? thats news to me. thought lacp would just allow you to add up the bandwidth
[17:53] <iggy> it _seems_ like most people aren't bothering with separate networks these days... just bonding everything together and hoping for the best
[17:53] <brad_mssw> mo-: no definitely not
[17:53] <mo-> iggy: that doesnt sound great ;)
[17:53] <mo-> that sounds like I might want to run network bandwidth tests with iperf then
[17:54] <iggy> and what brad said... about single host to host
[17:54] <brad_mssw> mo-: that's the total available bandwidth, but you'd need multiple hosts communicating with you and be 'lucky' enough for the algorithms to have chosen different ports
[17:54] <mo-> well
[17:54] <mo-> the ceph cluster has 5 hosts though
[17:54] <brad_mssw> to get 2Gbits/sec
[17:54] <mo-> so its not single host2host
[17:54] <vhasi> some LACP implementations take src/dst port into account, but most only use IP and/or MAC
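What brad_mssw and vhasi describe is the transmit hash: the bond picks one slave per flow, so a single src/dst pair never exceeds one link no matter how many slaves exist. The Linux bonding `layer2` policy, for example, is roughly an XOR of the last MAC octets modulo the slave count (a simplified sketch, not the exact kernel code):

```shell
SRC=0x1a   # last octet of the source MAC (example value)
DST=0x2b   # last octet of the destination MAC (example value)
NLINKS=2   # slaves in the bond
# the same src/dst pair always hashes to the same slave
HASH=$(( (SRC ^ DST) % NLINKS ))
echo "this flow always uses slave $HASH"
```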
[17:54] <iggy> then you are more likely to max those links out
[17:55] <mo-> whats weird is that iptraf shows a port usage below 90%
[17:55] <brad_mssw> anyhow, bonding is still good ... you do get some performance advantage out of it ... but you also get fault tolerance if your bond spans more than one switch in a stack ... which I find more important
[17:55] <mo-> dont have access to accurate numbers about that atm tho
[17:55] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) has joined #ceph
[17:56] <mo-> yea, its currently 2x2 interfaces, connected to different (yet stacked) switches
[17:56] <mo-> so the fault tolerance is there, just the bandwidth seems to be iffy
[17:56] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) Quit ()
[17:56] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[17:57] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[17:57] <brad_mssw> everyone always says if you don't have the money for 10gE, go infiniband .... but I've never tried that
[17:57] * i_m1 (~ivan.miro@gbibp9ph1--blueice2n2.emea.ibm.com) Quit (Read error: Operation timed out)
[17:57] <mo-> Ive seen some crazy folk over on the proxmox forums use infiniband interconnects without a switch (i.e. a shit ton of cables) to connect 3-5 nodes
[17:57] <iggy> not really cheaper
[17:57] <mo-> and yea, IB isnt any cheaper than 10GE
[17:57] <brad_mssw> well, they were talking about ebaying it
[17:58] <mo-> yea...
[17:58] <mo-> you dont do that in corporate environments tho
[17:58] <mo-> like.. ever
[17:58] <brad_mssw> yep, like I said, haven't tried it
[17:58] <brad_mssw> I've got my 10gE Junipers coming ;)
[17:59] <mo-> so the way Im looking at it, this 102MB/s might actually be maxing out the 2GBit cluster networking backend
[17:59] <janos_> i don't quite get why you don't ever do that in corp environments if you build in redundancy though. unless you really are expecting everything to fail at once and i haven't heard of that happening without external forces
[18:00] * lalatenduM (~lalatendu@121.244.87.117) Quit (Quit: Leaving)
[18:00] <mo-> because: you dont get any warranty from ebay, you cant properly book it with accounting...
[18:00] <mo-> and you cant get 3 separate price offers from competitors (can be a required policy)
[18:00] <janos_> if i counted the number of times i've needed warranty... i'd be done at zero
[18:01] <mo-> still, I just cannot recommend buying hardware that has ZERO support if it ever failed
[18:01] <janos_> my job is to support my choices
[18:01] <brad_mssw> janos_: it may not be strictly warranty, but 'support' in order to download software updates ... for you know, that whole pesky security thing
[18:02] <janos_> it's always felt like a really bad game
[18:02] <janos_> i go out of my way to avoid it
[18:02] <mo-> let me interject this: thanks for the discussion thus far. results have been great
[18:03] <janos_> i will admit to looking into the IB ebay route for home though ;)
[18:03] <iggy> mo-: I suspect you are correct... you are probably maxing out your interconnects at that point
[18:04] <janos_> though i'd prefer that 10GBe prices just come down
[18:04] <mo-> 2/5 of those boxes even have 10GE ports, its just that the other 3 dont
[18:04] <mo-> mainly because those 2 where initially bought as DRBD/iscsi-boxes
[18:05] <mo-> but, hm. maybe its not that simple though, consider this:
[18:06] <mo-> while box A can only write copies to other nodes with 200MB/s, it actually can receive writes from other nodes with 200MB/s at the same time (full duplex)
[18:06] <mo-> so the ceph bw should be higher, no?
[18:08] * bkopilov (~bkopilov@213.57.17.204) has joined #ceph
[18:08] <iggy> you need to have a multi-threaded, multi-host benchmark
[18:08] <mo-> yea afaik there is such a thing
[18:09] <iggy> or you're still coming back to single host to host bandwidth of 1gbit
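The arithmetic behind this line of reasoning can be sketched quickly (a rough model only; 3x replication is an assumption here, since the pool size is not stated in the log):

```python
# Back-of-the-envelope check: with replication factor R, the primary OSD
# forwards R-1 copies of each client write over the cluster network, so
# the backend link caps sustainable client write throughput.

GBIT_IN_MB = 1000 / 8  # 1 Gbit/s = 125 MB/s (decimal megabytes)

def max_client_write_mb(cluster_gbit, replicas):
    """Upper bound on client write MB/s imposed by the cluster network."""
    backend_mb = cluster_gbit * GBIT_IN_MB
    return backend_mb / (replicas - 1)

# 2 Gbit backend, 3x replication: the observed ~102 MB/s sits close to this cap
print(max_client_write_mb(2, 3))  # 125.0
```

With 2x replication the same backend would cap out around 250 MB/s, so the observed figure is consistent with a 3x pool saturating a 2 Gbit cluster network.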
[18:09] * jtaguinerd (~jtaguiner@112.205.13.239) Quit (Ping timeout: 480 seconds)
[18:09] <mo-> what was the name again, one sec
[18:09] <mo-> fio
[18:09] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[18:09] <mo-> thats the one
[18:09] * bandrus (~Adium@66.87.130.146) has joined #ceph
[18:10] <iggy> or rados bench
[18:11] <mo-> yea. fio has been used to eliminate the possibility of rados overhead (which dont actually seem to be present all that much)
[18:11] * rmoe (~quassel@12.164.168.117) has joined #ceph
[18:13] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) has joined #ceph
[18:14] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[18:15] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[18:15] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[18:16] <mo-> if youre interested http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
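The kind of test the linked article describes uses fio's rbd ioengine, which talks to librbd directly and so bypasses any filesystem or kernel-client layer. A minimal job file might look like the following sketch (the pool name, client name, and image name are placeholders, and the image must already exist):

```
; hypothetical fio job file exercising an RBD image via librbd
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=fio_test
rw=write
bs=4m
iodepth=32
direct=1

[rbd-write-test]
```

Running `fio rbd-write.fio` against such a file gives a client-side view of write bandwidth that can be compared with `rados bench` results to see how much overhead, if any, the higher layers add.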
[18:22] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:23] <iggy> rados overhead?
[18:24] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) Quit (Read error: Connection reset by peer)
[18:24] <iggy> rados is still involved... it's the core of ceph
[18:24] * koleosfuscus (~koleosfus@77.47.66.235.dynamic.cablesurf.de) Quit (Quit: koleosfuscus)
[18:24] * wschulze (~wschulze@cpe-69-206-251-158.nyc.res.rr.com) has joined #ceph
[18:24] * ScOut3R (~ScOut3R@catv-80-99-64-8.catv.broadband.hu) Quit (Ping timeout: 480 seconds)
[18:25] <mo-> yes of course, but the test I read wanted to check, whether rados introduced any overhead over fio "raw" tests, and turns out it kinda didnt
[18:25] * kevinc (~kevinc__@client65-40.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[18:27] * sz0 (~sz0@46.197.39.119) has joined #ceph
[18:29] * kevinc (~kevinc__@client65-40.sdsc.edu) has joined #ceph
[18:30] * kalleh (~kalleh@37-46-175-162.customers.ownit.se) Quit (Ping timeout: 480 seconds)
[18:31] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) Quit (Quit: neurodrone)
[18:31] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) has joined #ceph
[18:32] * rturk is now known as rturk|afk
[18:32] * rturk|afk is now known as rturk
[18:32] * thomnico (~thomnico@2a01:e35:8b41:120:5b7:24b1:2fe7:700f) Quit (Quit: Ex-Chat)
[18:33] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[18:35] * sz0 (~sz0@46.197.39.119) Quit (Quit: My iMac has gone to sleep. ZZZzzz…)
[18:39] * kevinc (~kevinc__@client65-40.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[18:39] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[18:42] * pressureman_ (~daniel@f052200173.adsl.alicedsl.de) has joined #ceph
[18:44] <pressureman_> hi, is anyone able to successfully hotplug an RBD with libvirt + qemu "virsh attach-device" ?
[18:44] <pressureman_> all i get is a hung virsh command, and often a CPU stall in the guest
[18:44] <pressureman_> hotplugging a qcow2 image file using the same method works ok, so the guest is capable of hotplug
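For reference, the device XML that `virsh attach-device` expects for an RBD-backed disk looks roughly like this sketch (the monitor hostname, image name, auth username, and secret UUID are placeholders; the cephx `<auth>` block can be dropped on clusters without authentication):

```
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <auth username='libvirt'>
    <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
  </auth>
  <source protocol='rbd' name='rbd/myimage'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>
```

If a qcow2 file hotplugs fine but this hangs, the usual suspects are the monitor being unreachable from the hypervisor or a missing/wrong libvirt secret, since qemu blocks while trying to open the image.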
[18:45] * tracphil (~tracphil@130.14.71.217) has joined #ceph
[18:48] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[18:53] * sputnik1_ (~sputnik13@207.8.121.241) has joined #ceph
[18:55] * kevinc (~kevinc__@2607:f720:f00:4042:f908:258f:ab51:78cb) has joined #ceph
[18:55] * sputnik1_ is now known as sputnik13net
[18:56] * sarob (~sarob@ip-64-134-224-227.public.wayport.net) has joined #ceph
[18:56] * adamcrume (~quassel@50.247.81.99) Quit (Remote host closed the connection)
[19:00] * rweeks (~rweeks@pat.hitachigst.com) has joined #ceph
[19:00] * jordanP (~jordan@185.23.92.11) Quit (Quit: Leaving)
[19:01] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[19:01] * LPG (~LPG@c-76-104-197-224.hsd1.wa.comcast.net) has joined #ceph
[19:03] * sz0 (~sz0@46.197.39.119) has joined #ceph
[19:05] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[19:07] * fghaas (~florian@91-119-141-13.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[19:11] * vmx (~vmx@p508A4C25.dip0.t-ipconnect.de) Quit (Quit: Leaving)
[19:17] * sjm (~sjm@pool-72-76-115-220.nwrknj.fios.verizon.net) has joined #ceph
[19:21] * sz0 (~sz0@46.197.39.119) Quit (Quit: My iMac has gone to sleep. ZZZzzz…)
[19:22] * JC1 (~JC@AMontpellier-651-1-446-248.w81-251.abo.wanadoo.fr) has joined #ceph
[19:23] * adamcrume (~quassel@2601:9:6680:47:2050:9cdc:f971:c56c) has joined #ceph
[19:29] * rotbeard (~redbeard@2a02:908:df19:9900:76f0:6dff:fe3b:994d) has joined #ceph
[19:29] * sz0 (~sz0@46.197.39.119) has joined #ceph
[19:29] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[19:29] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[19:30] * JC (~JC@AMontpellier-651-1-446-248.w81-251.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[19:31] * markbby (~Adium@168.94.245.3) has joined #ceph
[19:34] * bandrus (~Adium@66.87.130.146) Quit (Quit: Leaving.)
[19:37] * Cube (~Cube@72.21.82.34) has joined #ceph
[19:39] * JayJ (~jayj@157.130.21.226) has joined #ceph
[19:42] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) Quit (Quit: Leaving.)
[19:43] <JayJ> Hello experts. I have a question on OSD hardware selection. We are starting to deploy a ceph cluster in the lab. Hardware-wise for OSDs, do I need to get a RAID controller? My reading suggests that each disk is configured as an OSD. In that case can I configure a system with NO RAID controller at all? Or should I get a RAID controller that supports JBOD?
[19:44] * dmick (~dmick@2607:f298:a:607:6533:c9b:feb1:1fc0) Quit (Quit: Leaving.)
[19:44] * BManojlovic (~steki@cable-94-189-160-74.dynamic.sbb.rs) has joined #ceph
[19:45] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Remote host closed the connection)
[19:46] * Nacer (~Nacer@252-87-190-213.intermediasud.com) has joined #ceph
[19:46] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph
[19:46] * kevinc (~kevinc__@2607:f720:f00:4042:f908:258f:ab51:78cb) Quit (Quit: This computer has gone to sleep)
[19:46] <mo-> you dont need a raid controller. but depending on the size of the system you may need a disk expander card to get more disk slots
[19:47] <mo-> this is assuming that the onboard disk controller is not complete garbage
[19:47] * pressureman_ (~daniel@f052200173.adsl.alicedsl.de) Quit (Quit: Ex-Chat)
[19:49] * wrencsok (~wrencsok@wsip-174-79-34-244.ph.ph.cox.net) has joined #ceph
[19:49] <JayJ> mo: Thank you.
[19:50] <JayJ> mo: do you use the onboard controller for root volumes? (and nothing to do with OSDs?) Is that correct?
[19:50] <mo-> for the OS? yes
[19:51] <mo-> for a test lab, you dont need additional controllers. once youre talking about production environments though, you may benefit from controllers (if they have battery backed caches)
[19:52] <mo-> typically in enterprise servers you have 2 disks on the onboard controller in a RAID1 for the OS and then use raid/expander cards for OSDs
[19:55] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[19:56] * tdb (~tdb@myrtle.kent.ac.uk) Quit (Ping timeout: 480 seconds)
[19:56] * tdb (~tdb@myrtle.kent.ac.uk) has joined #ceph
[19:57] * markbby (~Adium@168.94.245.3) has joined #ceph
[19:59] <JayJ> mo: thanks agan.
[19:59] <JayJ> again
[20:00] <JayJ> mo-: kept missing the underscore. Sorry!
[20:00] <mo-> its fine, youre welcome though
[20:02] <liiwi> /win 16
[20:02] <liiwi> blerp
[20:04] * Nacer (~Nacer@252-87-190-213.intermediasud.com) Quit (Read error: Operation timed out)
[20:11] * kevinc (~kevinc__@client65-40.sdsc.edu) has joined #ceph
[20:12] * tracphil (~tracphil@130.14.71.217) Quit (Quit: leaving)
[20:12] * dneary (~dneary@87-231-145-225.rev.numericable.fr) Quit (Ping timeout: 480 seconds)
[20:15] * lalatenduM (~lalatendu@122.172.215.173) has joined #ceph
[20:15] * lalatenduM (~lalatendu@122.172.215.173) Quit ()
[20:16] * lcavassa (~lcavassa@89.184.114.246) Quit (Quit: Leaving)
[20:17] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) Quit (Quit: Leaving.)
[20:20] * rf (~rlf@186.122.46.231) has joined #ceph
[20:20] <rf> hola
[20:26] * dmick (~dmick@2607:f298:a:607:893d:ecf6:b7a3:6596) has joined #ceph
[20:26] * Tamil1 (~Adium@cpe-108-184-74-11.socal.res.rr.com) has joined #ceph
[20:27] <rf> hi
[20:28] * tinytim (~tinytim@200.79.255.125) has joined #ceph
[20:30] * vilobhmm (~vilobhmm@nat-dip27-wl-a.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:31] * tinytim (~tinytim@200.79.255.125) Quit (autokilled: Do not spam. mail support@oftc.net (2014-07-15 18:31:11))
[20:31] * TMM (~hp@178-84-46-106.dynamic.upc.nl) has joined #ceph
[20:32] <JayJ> mo-: In a three node cluster, how many mons & rados GWs would you create? Is there some sort of recommendation on OSDs per node, Mons per cluster and RadosGWs?
[20:35] <mo-> 3 mons. always an uneven number of mons. no idea about radosgw, never used that and most likely never will, just dont see a use case
[20:36] <mo-> OSDs per node: not more than what your network bandwidth allows for input. 2-12 is usually fine, wouldnt use more than that though
[20:36] * sz0 (~sz0@46.197.39.119) Quit (Quit: My iMac has gone to sleep. ZZZzzz…)
[20:37] <gchristensen> on top of the 2-12 OSDs per node is you might have reduced performance if you use a backplane to add in extra HDs, so you might have better luck having more machines with fewer HDs
[20:38] * madkiss (~madkiss@2001:6f8:12c3:f00f:eda4:a4c1:d891:d1ac) has joined #ceph
[20:38] <mo-> yea that too. internal io bandwidth can become an issue >6 disks
[20:39] * aldavud (~aldavud@213.55.184.148) has joined #ceph
[20:40] * rendar (~I@host217-119-dynamic.53-82-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[20:42] * rendar (~I@host217-119-dynamic.53-82-r.retail.telecomitalia.it) has joined #ceph
[20:43] <JayJ> Thanks!
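The three-mon layout recommended above translates into a ceph.conf along these lines (a minimal sketch; the fsid, hostnames, and addresses are placeholders, and `osd pool default size = 3` matches the usual triple-replication default):

```
[global]
fsid = 00000000-0000-0000-0000-000000000000
mon initial members = node1, node2, node3
mon host = 10.0.0.1, 10.0.0.2, 10.0.0.3
osd pool default size = 3
osd pool default min size = 2
```

Keeping the mon count odd matters because the monitors form a Paxos quorum: three mons tolerate one failure, while two mons tolerate none, since a majority is required.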
[20:46] * rf (~rlf@186.122.46.231) has left #ceph
[20:46] * sz0 (~sz0@46.197.39.119) has joined #ceph
[20:51] * sz0 (~sz0@46.197.39.119) Quit ()
[20:55] * rturk is now known as rturk|afk
[20:56] * linuxkidd_ (~linuxkidd@2001:420:2280:1272:b0bd:57cb:f985:4d95) Quit (Quit: Leaving)
[20:59] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[20:59] * bandrus (~Adium@66-87-130-146.pools.spcsdns.net) has joined #ceph
[21:00] * sjm (~sjm@pool-72-76-115-220.nwrknj.fios.verizon.net) has left #ceph
[21:04] * Cube (~Cube@72.21.82.34) Quit (Quit: Leaving.)
[21:08] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) has joined #ceph
[21:12] * bandrus (~Adium@66-87-130-146.pools.spcsdns.net) Quit (Quit: Leaving.)
[21:13] * Guyou (~bonnefil@mrb31-1-88-184-0-166.fbx.proxad.net) has joined #ceph
[21:14] * Guyou (~bonnefil@mrb31-1-88-184-0-166.fbx.proxad.net) has left #ceph
[21:16] * Cube (~Cube@72.21.82.34) has joined #ceph
[21:17] * kevinc (~kevinc__@client65-40.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[21:20] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Quit: Leaving.)
[21:21] * sarob (~sarob@ip-64-134-224-227.public.wayport.net) Quit (Quit: Leaving...)
[21:22] <iggy> and it really depends on your work load as well
[21:23] <iggy> if you are running a massive archiving system, you might be less likely to hit per node disk limits as soon
[21:29] * kevinc (~kevinc__@client65-40.sdsc.edu) has joined #ceph
[21:33] * rturk|afk is now known as rturk
[21:34] <mo-> true
[21:36] * bandrus (~Adium@66-87-130-146.pools.spcsdns.net) has joined #ceph
[21:36] * aldavud (~aldavud@213.55.184.148) Quit (Ping timeout: 480 seconds)
[21:37] * ikrstic (~ikrstic@109-92-251-185.dynamic.isp.telekom.rs) has joined #ceph
[21:42] * kevinc (~kevinc__@client65-40.sdsc.edu) Quit (Quit: This computer has gone to sleep)
[21:42] * rwheeler (~rwheeler@nat-pool-bos-t.redhat.com) Quit (Quit: Leaving)
[21:43] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[21:43] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit ()
[21:44] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) has joined #ceph
[21:44] * sarob (~sarob@ip-64-134-224-227.public.wayport.net) has joined #ceph
[21:47] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) has joined #ceph
[21:49] * sarob (~sarob@ip-64-134-224-227.public.wayport.net) Quit ()
[21:56] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) Quit (Ping timeout: 480 seconds)
[22:02] * madkiss1 (~madkiss@chello084112124211.20.11.vie.surfer.at) has joined #ceph
[22:03] * madkiss (~madkiss@2001:6f8:12c3:f00f:eda4:a4c1:d891:d1ac) Quit (Ping timeout: 480 seconds)
[22:04] * Cube (~Cube@72.21.82.34) Quit (Quit: Leaving.)
[22:05] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[22:06] * aldavud (~aldavud@217-162-119-191.dynamic.hispeed.ch) has joined #ceph
[22:07] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[22:09] * yguang11 (~yguang11@vpn-nat.corp.tw1.yahoo.com) Quit (Ping timeout: 480 seconds)
[22:12] * kevinc (~kevinc__@client65-40.sdsc.edu) has joined #ceph
[22:14] * tdasilva (~quassel@nat-pool-bos-u.redhat.com) has joined #ceph
[22:14] * scuttlemonkey is now known as scuttle|afk
[22:21] * lx0 (~aoliva@lxo.user.oftc.net) Quit (Remote host closed the connection)
[22:22] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[22:26] * JC (~JC@AMontpellier-651-1-446-248.w81-251.abo.wanadoo.fr) has joined #ceph
[22:26] * Sysadmin88 (~IceChat77@94.4.20.0) has joined #ceph
[22:29] * analbeard (~shw@support.memset.com) has joined #ceph
[22:33] * JC1 (~JC@AMontpellier-651-1-446-248.w81-251.abo.wanadoo.fr) Quit (Ping timeout: 480 seconds)
[22:40] * bandrus (~Adium@66-87-130-146.pools.spcsdns.net) Quit (Quit: Leaving.)
[22:43] * narb (~Jeff@38.99.52.10) Quit (Read error: Connection reset by peer)
[22:43] * narb_ (~Jeff@38.99.52.10) has joined #ceph
[22:45] * mondkalbantrieb (~quassel@mondkalbantrieb.de) Quit (Remote host closed the connection)
[22:45] * mondkalbantrieb (~quassel@sama32.de) has joined #ceph
[22:46] * vmx (~vmx@dslb-084-056-057-004.pools.arcor-ip.net) has joined #ceph
[22:49] * japuzzo (~japuzzo@pok2.bluebird.ibm.com) Quit (Quit: Leaving)
[22:52] * sjm (~sjm@pool-72-76-115-220.nwrknj.fios.verizon.net) has joined #ceph
[23:06] * zidarsk8 (~zidar@89-212-142-10.dynamic.t-2.net) has joined #ceph
[23:06] * rwheeler (~rwheeler@nat-pool-bos-u.redhat.com) Quit (Quit: Leaving)
[23:10] * sarob (~sarob@ip-64-134-224-227.public.wayport.net) has joined #ceph
[23:10] * Nacer (~Nacer@c2s31-2-83-152-89-219.fbx.proxad.net) Quit (Remote host closed the connection)
[23:15] * JayJ (~jayj@157.130.21.226) Quit (Quit: Computer has gone to sleep.)
[23:17] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has joined #ceph
[23:17] * ChanServ sets mode +v andreask
[23:18] * CephFan1 (~textual@68-233-224-175.static.hvvc.us) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…)
[23:20] * zerick (~eocrospom@190.187.21.53) Quit (Remote host closed the connection)
[23:21] * Cube (~Cube@66-87-64-189.pools.spcsdns.net) has joined #ceph
[23:22] * andreask (~andreask@h081217017238.dyn.cm.kabsi.at) has left #ceph
[23:23] * brad_mssw (~brad@shop.monetra.com) Quit (Quit: Leaving)
[23:32] * garphy is now known as garphy`aw
[23:35] * tdasilva (~quassel@nat-pool-bos-u.redhat.com) Quit (Read error: Operation timed out)
[23:36] * ikrstic (~ikrstic@109-92-251-185.dynamic.isp.telekom.rs) Quit (Quit: Konversation terminated!)
[23:40] * markbby (~Adium@168.94.245.3) Quit (Quit: Leaving.)
[23:50] * dspano (~dspano@rrcs-24-103-221-202.nys.biz.rr.com) Quit (Quit: leaving)
[23:51] * lcavassa (~lcavassa@94.166.88.97) has joined #ceph
[23:59] * tdasilva (~quassel@nat-pool-bos-t.redhat.com) has joined #ceph

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.