#ceph IRC Log


IRC Log for 2013-12-27

Timestamps are in GMT/BST.

[0:00] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[0:05] * Cube (~Cube@66-87-64-47.pools.spcsdns.net) has joined #ceph
[0:08] * DarkAceZ (~BillyMays@50-32-4-78.drr01.hrbg.pa.frontiernet.net) Quit (Read error: Operation timed out)
[0:08] * dis (~dis@109.110.66.145) has joined #ceph
[0:09] * DarkAceZ (~BillyMays@50-32-4-189.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[0:15] * allsystemsarego (~allsystem@5-12-240-107.residential.rdsnet.ro) Quit (Quit: Leaving)
[0:19] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[0:25] * Pedras1 (~Adium@172-2-241-104.lightspeed.sntcca.sbcglobal.net) has joined #ceph
[0:25] * Pedras (~Adium@172-2-241-104.lightspeed.sntcca.sbcglobal.net) Quit (Read error: Connection reset by peer)
[0:25] * danieagle (~Daniel@186.214.75.29) Quit (Quit: inte+ e Obrigado Por tudo mesmo! :-D)
[0:39] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[0:44] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) Quit (Remote host closed the connection)
[0:44] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) has joined #ceph
[0:44] * cronix (~cronix@5.199.139.166) has joined #ceph
[0:44] * Pedras1 (~Adium@172-2-241-104.lightspeed.sntcca.sbcglobal.net) Quit (Quit: Leaving.)
[0:52] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) Quit (Ping timeout: 480 seconds)
[0:58] * cronix (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[1:06] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) has joined #ceph
[1:45] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) Quit (Remote host closed the connection)
[1:46] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) has joined #ceph
[1:47] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[1:49] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) has joined #ceph
[1:54] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) Quit (Ping timeout: 480 seconds)
[1:54] * zerick (~eocrospom@190.187.21.53) Quit (Read error: Operation timed out)
[2:03] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[2:18] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) Quit (Quit: ZNC - http://znc.in)
[2:19] * yanzheng (~zhyan@134.134.137.71) has joined #ceph
[2:22] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) has joined #ceph
[2:40] * dlan (~dennis@116.228.88.131) has joined #ceph
[2:56] * xianxia (~chatzilla@222.240.177.42) has joined #ceph
[3:00] <xianxia> Hi, I am using the ceph-qa suite, but how do I configure it with teuthology? It always shows "./schedule_suite.sh: ./virtualenv/bin/teuthology-suite: /root/teuthology/virtualenv/bin/python: bad interpreter: No such file or directory"
[3:02] <xianxia> do I need to put ceph-qa-suite and teuthology under ~/src?
[3:03] * Tamil2 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has joined #ceph
[3:05] <xianxia> Oh, I see, I need to reinstall teuthology after moving it to another dir
[3:05] * nwat (~textual@adsl-99-120-180-11.dsl.tul2ok.sbcglobal.net) has joined #ceph
[3:06] <dmick> xianxia: yes, python virtualenvs are not relocatable
[3:07] <xianxia> yes, thanks dmick
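For reference, a minimal sketch of recovering a teuthology checkout that has been moved: the virtualenv hard-codes the old interpreter path, so it has to be rebuilt in place. The directory and the ./bootstrap step are assumptions about a typical checkout, not taken from the log.
    cd /root/src/teuthology   # assumed new location of the checkout
    rm -rf virtualenv         # discard the relocated, now-broken virtualenv
    ./bootstrap               # recreate the virtualenv in place, assuming the checkout ships its usual bootstrap script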
[3:09] * Tamil1 (~Adium@cpe-76-168-18-224.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:10] <xianxia> dmick, there is another question: how can I solve the error "socket.error: [Errno 110] Connection timed out"?
[3:11] <xianxia> are there any options to set the connection timeout?
[3:17] * xianxia_ (~chatzilla@61.187.54.9) has joined #ceph
[3:17] <dmick> I don't know, but my first question would be "which connection"
[3:18] <dmick> and do you know that it's not a failure
[3:19] <xianxia_> maybe the connection to the testing machine?
[3:19] <xianxia_> socket connection?
[3:19] <dmick> anything's possible, but I can't tell from that error message which connection
[3:20] * dpippenger (~riven@cpe-198-72-154-134.socal.res.rr.com) has joined #ceph
[3:21] <xianxia_> I am still confused about how the ceph-qa-suite combines with teuthology
[3:22] <xianxia_> http://dachary.org/?p=2229
[3:22] * xianxia (~chatzilla@222.240.177.42) Quit (Ping timeout: 480 seconds)
[3:22] * xianxia_ is now known as xianxia
[3:23] * nwat (~textual@adsl-99-120-180-11.dsl.tul2ok.sbcglobal.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[3:24] <dmick> the qa suites are teuthology jobs. they use teuthology to run.
[3:26] <dmick> last worker log at 16:10; PID 24616 is stuck on a write to stderr
[3:27] <dmick> which is apparently a pts
[3:29] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) has joined #ceph
[3:30] <dmick> sorry, wrong window for the last two
[3:35] * nwat (~textual@99.120.180.11) has joined #ceph
[3:35] * nwat (~textual@99.120.180.11) Quit ()
[3:35] * Tamil2 (~Adium@cpe-76-168-18-224.socal.res.rr.com) has left #ceph
[3:53] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[4:00] * cronix (~cronix@5.199.139.166) has joined #ceph
[4:00] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[4:01] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) Quit (Quit: Leaving.)
[4:04] * cronix (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[4:18] * hugo (~hugo@42-75-228-172.dynamic-ip.hinet.net) has joined #ceph
[4:21] * dmsimard (~Adium@69-165-206-93.cable.teksavvy.com) Quit (Quit: Leaving.)
[4:46] * nwat (~textual@99.120.180.11) has joined #ceph
[4:54] * nwat (~textual@99.120.180.11) Quit (Ping timeout: 480 seconds)
[4:55] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[4:58] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[5:06] * fireD (~fireD@93-142-227-137.adsl.net.t-com.hr) has joined #ceph
[5:06] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[5:07] * fireD_ (~fireD@93-142-241-71.adsl.net.t-com.hr) Quit (Ping timeout: 480 seconds)
[5:12] * Hakisho (~Hakisho@0001be3c.user.oftc.net) Quit (Remote host closed the connection)
[5:15] * Hakisho (~Hakisho@0001be3c.user.oftc.net) has joined #ceph
[5:16] * nwat (~textual@adsl-99-120-180-11.dsl.tul2ok.sbcglobal.net) has joined #ceph
[5:17] * nwat (~textual@adsl-99-120-180-11.dsl.tul2ok.sbcglobal.net) Quit ()
[5:20] * lofejndif (~lsqavnbok@9YYAAH6OO.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[5:27] * Vacum_ (~vovo@88.130.205.51) has joined #ceph
[5:30] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[5:31] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[5:34] * Vacum (~vovo@88.130.201.157) Quit (Ping timeout: 480 seconds)
[5:39] * DarkAceZ (~BillyMays@50-32-4-189.drr01.hrbg.pa.frontiernet.net) Quit (Read error: Operation timed out)
[5:41] * DarkAceZ (~BillyMays@50-32-49-202.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[5:43] * nwat (~textual@adsl-99-120-180-11.dsl.tul2ok.sbcglobal.net) has joined #ceph
[5:51] * Cube (~Cube@66-87-64-47.pools.spcsdns.net) Quit (Read error: Connection reset by peer)
[5:51] * nwat (~textual@adsl-99-120-180-11.dsl.tul2ok.sbcglobal.net) Quit (Ping timeout: 480 seconds)
[5:53] * Pedras (~Adium@c-67-188-26-20.hsd1.ca.comcast.net) Quit (Quit: Leaving.)
[5:59] * julian (~julianwa@125.70.135.60) has joined #ceph
[6:14] * yeled (~yeled@spodder.com) Quit (Remote host closed the connection)
[6:14] * yeled (~yeled@spodder.com) has joined #ceph
[6:30] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) Quit (Remote host closed the connection)
[6:35] * hugo (~hugo@42-75-228-172.dynamic-ip.hinet.net) Quit (Ping timeout: 480 seconds)
[6:38] * hugo (~hugo@42-75-228-172.dynamic-ip.hinet.net) has joined #ceph
[6:41] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) has joined #ceph
[6:44] * dmick (~dmick@2607:f298:a:607:e4c2:4783:f4e2:29cd) has left #ceph
[6:50] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) has joined #ceph
[7:00] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[7:11] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:14] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[7:25] * madkiss (~madkiss@089144201228.atnat0010.highway.a1.net) has joined #ceph
[7:26] * hugo (~hugo@42-75-228-172.dynamic-ip.hinet.net) Quit (Quit: Leaving...)
[7:28] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) has joined #ceph
[7:41] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) has left #ceph
[7:42] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Quit: Leaving.)
[7:42] * sherry (~sherry@mike-alien.esc.auckland.ac.nz) has joined #ceph
[7:56] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[7:56] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[7:58] * i_m (~ivan.miro@95.180.8.206) has joined #ceph
[8:05] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[8:07] * sleinen1 (~Adium@2001:620:0:25:98d5:c199:2703:35ba) has joined #ceph
[8:10] * sarob (~sarob@2601:9:7080:13a:6c0f:2cd6:8f82:e43) has joined #ceph
[8:13] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[8:18] * sarob (~sarob@2601:9:7080:13a:6c0f:2cd6:8f82:e43) Quit (Ping timeout: 480 seconds)
[8:40] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[8:55] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) Quit (Quit: shimo)
[8:58] * shimo (~A13032@122x212x216x66.ap122.ftth.ucom.ne.jp) has joined #ceph
[8:59] * BManojlovic (~steki@91.195.39.5) has joined #ceph
[9:01] * xianxia (~chatzilla@61.187.54.9) Quit (Quit: ChatZilla 0.9.90.1 [Firefox 24.0/20130910160258])
[9:02] * rendar (~s@host212-176-dynamic.1-87-r.retail.telecomitalia.it) has joined #ceph
[9:19] * mtk (~mtk@ool-44c35983.dyn.optonline.net) Quit (Ping timeout: 480 seconds)
[9:29] * mtk (~mtk@ool-44c35983.dyn.optonline.net) has joined #ceph
[9:40] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[9:40] * ChanServ sets mode +v andreask
[9:42] * hjjg (~hg@p3EE3222B.dip0.t-ipconnect.de) has joined #ceph
[9:47] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Ping timeout: 480 seconds)
[9:53] * hjjg_ (~hg@p3EE31F97.dip0.t-ipconnect.de) has joined #ceph
[9:55] * hjjg (~hg@p3EE3222B.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:55] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[10:00] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[10:08] * yanzheng (~zhyan@134.134.137.71) Quit (Remote host closed the connection)
[10:15] * cronix (~cronix@5.199.139.166) has joined #ceph
[10:20] * cronix (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[10:30] * cronix (~cronix@5.199.139.166) has joined #ceph
[10:32] <glambert> how do I get my ceph cluster to operate when one node goes down?
[10:32] <glambert> three node cluster
[10:32] <glambert> just powered a machine off to test and the cluster won't respond now
[10:33] <madkiss> well
[10:33] <madkiss> do you have your OSD journals on SSDs?
[10:39] * cronix (~cronix@5.199.139.166) Quit (Ping timeout: 480 seconds)
[10:39] <glambert> no
[10:48] <glambert> madkiss, if I remove an OSD and then add a new one in, will it replicate the data from the other OSDs to the new one automatically?
[10:49] <madkiss> that's the plan, if your replication policy specifies it
[10:49] <glambert> I believe so; it's a pretty simple setup. I just forgot I'd followed the setup to the letter and created OSDs in /tmp :/
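As an illustration of the remove-and-replace flow discussed above, a hedged sketch using a hypothetical osd.3; with a replication factor above 1, data is backfilled onto the replacement automatically.
    ceph osd out 3                      # stop placing data on the OSD and let it drain
    service ceph stop osd.3             # stop the daemon once draining is done
    ceph osd crush remove osd.3         # remove it from the CRUSH map
    ceph auth del osd.3                 # drop its authentication key
    ceph osd rm 3                       # remove it from the OSD map
    ceph-deploy osd create node1:sdb    # add the replacement (host and device are illustrative)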
[10:53] * cronix (~cronix@5.199.139.166) has joined #ceph
[10:57] * julian (~julianwa@125.70.135.60) Quit (Quit: afk)
[10:57] * cronix (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[10:59] * cronix (~cronix@5.199.139.166) has joined #ceph
[11:00] <rendar> where can I find a complete ceph guide in PDF?
[11:01] <andreask> rendar: never heard of a pdf version
[11:02] <sherry> rendar: there is one for ceph developers
[11:04] <sherry> http://www.google.co.nz/url?sa=t&rct=j&q=&esrc=s&source=web&cd=4&ved=0CDsQFjAD&url=http%3A%2F%2Ftracker.ceph.com%2Fattachments%2Fdownload%2F486%2Fceph.pdf&ei=cVC9UqP6D9DDkAXG2YCAAg&usg=AFQjCNEpK98jVds83aBM33CgbBwSBv5nfQ&bvm=bv.58187178,d.dGI&cad=rja
[11:05] <glambert> how do I remove a node completely from my cluster and add it in again? one of my nodes doesn't have the ceph-osd package installed somehow
[11:05] <glambert> everything was fine until I switched it off
[11:07] * hjorth_ (~hjorth@sig9.kill.dk) has joined #ceph
[11:08] * hjorth (~hjorth@2a01:77c0:0:f000:20c:29ff:fec5:b51b) Quit (Ping timeout: 480 seconds)
[11:13] * cronix (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[11:14] * tziOm (~bjornar@194.19.106.242) has joined #ceph
[11:21] <madkiss> glambert: I don't mean to be impolite, but... what are you doing over there? ;-)
[11:21] <glambert> sorry?
[11:25] <madkiss> Have you set up Ceph using ceph-osd?
[11:25] <madkiss> sorry
[11:25] <madkiss> ceph-deploy, of course.
[11:25] <glambert> yes
[11:25] <glambert> I just followed the quick start
[11:25] <madkiss> then how can the host not have ceph-osd?
[11:26] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) Quit (Remote host closed the connection)
[11:26] <glambert> *shrugs*
[11:26] <glambert> dpkg --get-selections | grep osd = nothing
[11:26] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) has joined #ceph
[11:26] <glambert> got ceph, ceph-mds, ceph-common etc
[11:26] <glambert> ceph-fs-common
[11:26] <madkiss> did you use ceph-deploy prepare or install?
[11:26] <glambert> python-ceph
[11:27] <glambert> install
[11:27] <glambert> the cluster has been operating for weeks absolutely fine
[11:27] <glambert> just decided to pull the plug and see what happened really
[11:27] <glambert> semi-glad I did
[11:28] <madkiss> and you had OSDs in /tmp?!
[11:29] <glambert> yes
[11:30] * cronix (~cronix@5.199.139.166) has joined #ceph
[11:30] <glambert> I'd forgotten about that until after I'd booted it back up
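If a node has lost its ceph binaries, a hedged sketch of reinstalling them from the admin node with ceph-deploy; the hostname node2 is illustrative.
    ceph-deploy install node2                # reinstall the ceph packages on the node
    ssh node2 'dpkg -S $(which ceph-osd)'    # check which package now provides the ceph-osd binary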
[11:34] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) Quit (Ping timeout: 480 seconds)
[11:36] * jbd_ (~jbd_@2001:41d0:52:a00::77) has joined #ceph
[11:39] * cronix (~cronix@5.199.139.166) Quit (Ping timeout: 480 seconds)
[11:51] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) Quit (Read error: Operation timed out)
[11:56] * diegows (~diegows@190.190.17.57) has joined #ceph
[11:57] * sleinen1 (~Adium@2001:620:0:25:98d5:c199:2703:35ba) Quit (Quit: Leaving.)
[11:57] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[12:02] * thorus (~jonas@82.199.158.66) has joined #ceph
[12:03] <thorus> I'm using ceph version 0.61.9 (7440dcd135750839fa0f00263f80722ff6f51e90). ceph osd repair 12 gives me unknown command but ceph --help shows it as an option. Is it not available in this version?
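Repair can be requested per OSD or per placement group, and which spellings a given release accepts varies, so the commands below are only illustrative (the pgid is made up).
    ceph osd repair 12      # ask osd.12 to repair its placement groups
    ceph pg repair 3.4f     # repair a single placement group by pgid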
[12:05] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[12:28] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[12:40] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[12:43] * allsystemsarego (~allsystem@5-12-240-107.residential.rdsnet.ro) has joined #ceph
[12:50] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has joined #ceph
[12:52] * zidarsk8 (~zidar@84-255-203-33.static.t-2.net) has left #ceph
[12:53] * lx0 (~aoliva@lxo.user.oftc.net) has joined #ceph
[12:59] * madkiss (~madkiss@089144201228.atnat0010.highway.a1.net) Quit (Quit: Leaving.)
[13:00] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:03] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) has joined #ceph
[13:07] * andreask (~andreask@h081217067008.dyn.cm.kabsi.at) has joined #ceph
[13:07] * ChanServ sets mode +v andreask
[13:33] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[13:41] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[13:43] * Koma (~Koma@2-235-211-148.ip230.fastwebnet.it) has joined #ceph
[13:43] <Koma> *CephLogBot* This channel is logged <- Hi NSA!
[13:47] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[13:47] * madkiss (~madkiss@089144201228.atnat0010.highway.a1.net) has joined #ceph
[13:48] <dzianis_> Hi all. Probably a stupid question... Can I combine different file systems on OSDs?
[14:03] <jks> as far as I know, yes
[14:03] <jks> I don't know why you would do it, but I have used mixed btrfs and xfs before
[14:18] * rendar (~s@host212-176-dynamic.1-87-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[14:20] * rendar (~s@host83-179-dynamic.56-79-r.retail.telecomitalia.it) has joined #ceph
[14:26] * gaveen (~gaveen@175.157.10.103) has joined #ceph
[14:29] * lightspeed (~lightspee@82-68-190-217.dsl.in-addr.zen.co.uk) Quit (Ping timeout: 480 seconds)
[14:29] <dzianis_> Thanks
[14:31] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:33] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[14:38] * sroy (~sroy@208.88.110.46) has joined #ceph
[14:41] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) has joined #ceph
[14:41] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[14:54] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) Quit (Quit: Leaving.)
[14:55] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[14:55] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has left #ceph
[15:02] * fouxm (~fouxm@AOrleans-258-1-53-232.w90-24.abo.wanadoo.fr) has joined #ceph
[15:03] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[15:10] <glambert> any way to see the filepaths to the OSDs?
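For reference, a hedged sketch of commands that typically show where OSD data lives; the paths assume the default layout and that ceph-disk is installed.
    mount | grep /var/lib/ceph/osd    # mounted OSD data directories and their backing devices
    ceph-disk list                    # map block devices to OSD ids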
[15:12] * mozg (~andrei@46.229.149.194) has joined #ceph
[15:13] <mozg> hello guys
[15:13] <mozg> happy holidays to everyone
[15:14] <mozg> was wondering if someone here could shed some light on the issues that I am having with ceph rbd + qemu
[15:14] <mozg> I am having some occasional performance issues, which come and go
[15:15] <mozg> when this happens my vms become very unresponsive
[15:15] <mozg> with high io wait across all vms
[15:15] <mozg> the issue might last for up to 10 minutes
[15:15] <mozg> but usually around 2-5 minutes
[15:15] <mozg> and all of a sudden everything starts working well.
[15:16] <mozg> looking at the logs of the host servers I do not see any errors
[15:16] <mozg> the ceph osd/mon logs are also looking good as far as I can tell
[15:16] <mozg> there is not a great deal of activity on the osd/mon servers when this happens
[15:17] <mozg> i am running ceph 0.67.4
[15:17] <mozg> libvirt 1.1.4 and qemu 1.5.0
[15:17] <mozg> on ubuntu 12.04 servers
[15:21] <mozg> initially i thought it's a networking issue, but i can't find any evidence of that
[15:21] <mozg> no errors or drops on the interface
[15:21] <mozg> the speed is 40gbit/s ipoib
[15:22] <mozg> where do I begin?
[15:23] <glambert> networking would've been my bet but if there's no issue there
[15:24] <glambert> any pattern to it? do they get high io time after boot? or at a specific time or time of day?
[15:24] * mozg (~andrei@46.229.149.194) Quit (Remote host closed the connection)
[15:24] <glambert> guess not
[15:25] * mtanski (~mtanski@cpe-74-65-252-48.nyc.res.rr.com) Quit (Quit: mtanski)
[15:29] * mozg (~andrei@46.229.149.194) has joined #ceph
[15:30] <mozg> glambert, yeah, that's the first thing i've checked
[15:30] <mozg> also, I do not recall having these issues when I was on a previous version - 0.61 i believe
[15:30] <mozg> any idea where I should begin?
[15:30] <glambert> any pattern to it? do they get high io time after boot? or at a specific time or time of day?
[15:31] <mozg> glambert, nope, i can't find any pattern at all
[15:31] <mozg> the storage cluster is not busy at all
[15:31] <glambert> do all vms do it? at the same time?
[15:31] <mozg> i do have around 30 vms, but they are all idle most of the time
[15:31] <mozg> glambert, yeah, same time
[15:31] <glambert> across multiple hosts?
[15:32] <mozg> all vms across several hypervisor hosts
[15:32] <glambert> very odd
[15:32] <mozg> indeed
[15:32] <kraken> http://i.imgur.com/bQcbpki.gif
[15:32] <glambert> so the only common thing that they are all using/sharing is Ceph?
[15:32] <glambert> apart from the network which you've ruled out
[15:32] <mozg> that's correct
[15:33] <mozg> they are using different hypervisor / libvirt versions
[15:33] <mozg> but all of them are running kvm
[15:33] <mozg> ceph -w and ceph -s do not show any issues
[15:33] <glambert> sorry, not really sure what to advise
[15:33] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[15:34] <mozg> i've initially thought that this happens during the scrubbing / deep scrubbing period as the problems coincided on several occasions
[15:34] <mozg> but they also happen when there is no scrubbing as well
[15:34] <mozg> (((
[15:38] <thorus> mozg are you using xfs? we had the problem that fragmented files slowed it down horribly
[15:39] <mozg> thorus, yes I am
[15:39] <mozg> xfs throughout my cluster
[15:39] <mozg> i've got two osd servers with 17 osds and 3 mons
[15:39] <mozg> all osds are using xfs and I've got around 4 osd journals per ssd disk
[15:40] <mozg> thorus, could you tell me how to check the fragmentation status of the xfs fs
[15:40] <mozg> and what levels are considered to be high
[15:40] <thorus> mozg we had 50% fragmentation at some osds
[15:40] <mozg> i've got 3TB osd disks throughout my cluster
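For the fragmentation check discussed here, a sketch using standard xfsprogs tools; the device and mount point are illustrative.
    xfs_db -c frag -r /dev/sdb1             # report the fragmentation factor of an OSD filesystem
    xfs_fsr -v /var/lib/ceph/osd/ceph-7     # defragment the mounted OSD data directory online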
[15:42] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[15:43] <mozg> i've noticed that i occasionally have slow requests in the logs
[15:44] <mozg> but the performance issues do not always correlate to the slow requests
[15:44] * lx0 is now known as lxo
[15:45] <mozg> the slow requests seem to be the same:
[15:45] <mozg> 2013-12-27 06:37:32.818166 osd.7 192.168.168.200:6821/15021 1905 : [WRN] slow request 34.451390 seconds old, received at 2013-12-27 06:36:58.366569: osd_op(client.2437174.0:107061 rbd_data.1b9dc72ae8944a.0000000000000048 [write 3129344~24576] 5.657dfd8e e34357) v4 currently waiting for subops from [9]
[15:45] <mozg> but different osds
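One way to dig into slow requests like the one above is the admin socket of the implicated OSD (osd.9 here, matching the "waiting for subops from [9]" message); the socket path assumes the default location and that the running version supports these commands.
    ceph --admin-daemon /var/run/ceph/ceph-osd.9.asok dump_ops_in_flight   # operations currently blocked
    ceph --admin-daemon /var/run/ceph/ceph-osd.9.asok dump_historic_ops    # recent slow operations with timings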
[15:54] <mozg> my fragmentation is about 25%
[15:54] <mozg> is this a lot?
[15:56] * madkiss (~madkiss@089144201228.atnat0010.highway.a1.net) Quit (Ping timeout: 480 seconds)
[15:57] * dpippenger (~riven@cpe-198-72-154-134.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:58] <thorus> dunno really^^ I just wanted to share our experience with fragmentation
[15:58] <thorus> and it is in the docs that it can cause performance issues
[15:58] <thorus> and for us defragmenting solved the problem
[16:05] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) Quit (Remote host closed the connection)
[16:06] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) has joined #ceph
[16:09] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[16:09] * BillK (~BillK-OFT@124.149.111.175) Quit (Ping timeout: 480 seconds)
[16:14] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) Quit (Ping timeout: 480 seconds)
[16:18] * sroy (~sroy@208.88.110.46) Quit (Ping timeout: 480 seconds)
[16:29] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) has joined #ceph
[16:29] * DarkAceZ (~BillyMays@50-32-49-202.drr01.hrbg.pa.frontiernet.net) Quit (Ping timeout: 480 seconds)
[16:31] * mattbenjamin (~matt@76-206-42-105.lightspeed.livnmi.sbcglobal.net) Quit (Quit: Leaving.)
[16:34] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[16:37] * zidarsk8 (~zidar@89-212-28-144.dynamic.t-2.net) has joined #ceph
[16:42] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[16:43] * DarkAceZ (~BillyMays@50-32-31-24.drr01.hrbg.pa.frontiernet.net) has joined #ceph
[16:44] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) Quit (Read error: Operation timed out)
[16:45] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) has joined #ceph
[16:45] <ron-slc> on MDS startup, I see many messages similar to this: "mds.0 [WRN] ino 100000b74b5" This warning is extremely non-specific, does anybody know what it means? "mds wrn ino" seems too short for a successful Google search
[16:47] <joao> ron-slc, crank up mds debugging; maybe it'll be more verbose then
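A hedged example of what cranking up MDS debugging can look like in ceph.conf before restarting the MDS; the levels are just common verbose settings, not a recommendation from the log.
    [mds]
        debug mds = 20
        debug ms = 1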
[16:52] * i_m (~ivan.miro@95.180.8.206) Quit (Read error: Operation timed out)
[16:54] * sagelap1 (~sage@2600:1012:b02e:fd15:94da:65ac:7e18:cadd) has joined #ceph
[16:55] * Muhlemmer (~kvirc@cable-90-50.zeelandnet.nl) has joined #ceph
[16:59] * fouxm (~fouxm@AOrleans-258-1-53-232.w90-24.abo.wanadoo.fr) Quit (Remote host closed the connection)
[17:01] * sagelap (~sage@cpe-23-242-158-79.socal.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:01] * mattbenjamin (~matt@aa2.linuxbox.com) has joined #ceph
[17:03] * mozg (~andrei@46.229.149.194) Quit (Ping timeout: 480 seconds)
[17:06] * mtanski (~mtanski@69.193.178.202) has joined #ceph
[17:09] * ebo^ (~ebo@p200300624F2DA901CD2EC689F9F2A86A.dip0.t-ipconnect.de) has joined #ceph
[17:17] * BManojlovic (~steki@91.195.39.5) Quit (Quit: Ja odoh a vi sta 'ocete...)
[17:17] * hjjg_ (~hg@p3EE31F97.dip0.t-ipconnect.de) Quit (Read error: Operation timed out)
[17:19] * sagelap (~sage@38.122.20.226) has joined #ceph
[17:22] * sroy (~sroy@ip-208-88-110-45.savoirfairelinux.net) Quit (Read error: Operation timed out)
[17:27] * sagelap1 (~sage@2600:1012:b02e:fd15:94da:65ac:7e18:cadd) Quit (Ping timeout: 480 seconds)
[17:33] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) has joined #ceph
[17:35] * alphe (~alphe@0001ac6f.user.oftc.net) has joined #ceph
[17:36] <alphe> hello everyone
[17:41] * jesus (~jesus@emp048-51.eduroam.uu.se) Quit (Read error: Operation timed out)
[17:46] <Psi-Jack> Hmm, I've got osd's that aren't starting back up; they fault when trying. :(
[17:47] <Psi-Jack> but, mons and mds were just fine starting back up. :/
[17:48] * scuttlemonkey_ (~scuttlemo@96-42-139-47.dhcp.trcy.mi.charter.com) has joined #ceph
[17:49] <Psi-Jack> and, ceph status faults on that server having the issue, too. Gaaah.
[17:52] * scuttlemonkey (~scuttlemo@96-42-139-47.dhcp.trcy.mi.charter.com) Quit (Ping timeout: 480 seconds)
[17:53] <Psi-Jack> I upgraded my ceph servers from centos 6.4 to 6.5 and the only part truly failing is ceph-osd and ceph status/health, giving a fault.
[17:53] <Psi-Jack> 2013-12-27 11:53:08.031862 7fdcfb6fd700 0 -- :/1002728 >> 172.18.0.7:6789/0 pipe(0x7fdcf4000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fdcf4000e60).fault
[17:55] <sagewk> they aren't able to connect to hte monitor.. can you connect to that port 172.18.0.7:6789 from the osd host?
[17:56] <Psi-Jack> Yeah, I just thought of that, and am about to try it.
[17:56] <Psi-Jack> I can ping to it from other hosts.
[17:56] <Psi-Jack> I can ping 172.18.0.7 too.
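A quick way to test reachability of the monitor port from the OSD host, as suggested above; the address and port are taken from the pasted fault line.
    nc -zv -w 5 172.18.0.7 6789   # succeeds only if a TCP connection to the mon can be opened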
[17:57] <Psi-Jack> Oh funny.
[17:57] <Psi-Jack> I fully shut down the server, and brought it back up, and now it's all dandy.
[17:57] <Psi-Jack> Except the osd's aren't running yet. :)
[17:59] * jesus (~jesus@emp048-51.eduroam.uu.se) has joined #ceph
[17:59] <Psi-Jack> Sigh.. And it didn't start the osd's because the disks aren't showing up for it.
[18:01] <Psi-Jack> There we go. All back up. It was a network issue mostly. Bleh.
[18:05] <alphe> Psi-Jack good job :)
[18:06] <alphe> I had issues too after updates bringing the osds back up
[18:06] <Psi-Jack> Heh, got 9 up and 9 in again, as it should be.
[18:06] <alphe> it seems that auto start is broken; the disks don't auto mount anymore and so the osds don't auto start
[18:06] <Psi-Jack> I'm still running the previous release from emperor.
[18:07] <Psi-Jack> So, I was going to ask. Will upgrading to emperor cause issues with clients still using cuttlefish, both cephfs and qemu-rbd and ceph client tools on the hosts directly talking with, but not running ceph-{mon,mds,osd) services?
[18:08] * tziOm (~bjornar@194.19.106.242) Quit (Remote host closed the connection)
[18:09] <alphe> Psi-Jack normally no issues if you do it as documented
[18:10] <Psi-Jack> Well, the cephfs clients, I can still update because those are within the vm's actually using cephfs for shared storage, like webservers. but the hypervisors don't yet have ceph emperor packages for them from Proxmox VE repos, so they'd likely stay cuttlefish.
[18:10] <alphe> but frankly I had to reinstall my ceph cluster so many times that I can't say anymore that I have a process kept from 0.62 -> 0.67 -> 0.72
[18:10] <Psi-Jack> I've never fully broken a ceph cluster since I started 1 year ago.
[18:10] <alphe> I made cuttlefish -> dumpling -> emperor updates
[18:10] <Psi-Jack> i had one OSD fault so badly that I had to dump and reimport the 1 OSD, but besides that, no issue.
[18:11] <Psi-Jack> Ooh, dumpling's between cuttlefish and emperor?
[18:11] <alphe> Psi-Jack sometimes it is better to start a new cluster when things are too messy and you spend abnormal time maintaining the cluster
[18:11] <Psi-Jack> alphe: I can't really do that. :)
[18:11] <alphe> or was it dumpling then cuttlefish then emperor
[18:11] <Psi-Jack> Yeah, it was dumpling, cuttlefish is 0.67.4
[18:12] <alphe> Psi-Jack yes now that my ceph cluster will enter production I will not be able to do that either
[18:12] * madkiss (~madkiss@089144201228.atnat0010.highway.a1.net) has joined #ceph
[18:12] <Psi-Jack> alphe: my home setup involved 4 hypervisors and 3 ceph servers. The 4 hypervisors use the ceph cluster for everything storage-wise. RBD disks for the virtual machines, etc
[18:12] <alphe> Psi-Jack you would expect naming to follow alphabetical order
[18:13] <Psi-Jack> heh, my internet routers are even virtual machines inside the 4 hypervisors.
[18:13] <alphe> great and virtualisation by the book
[18:13] <Psi-Jack> So, no ceph cluster, no internet access. :)
[18:13] <alphe> you use stuff like QEMU?
[18:14] <Psi-Jack> Specifically, I use proxmox ve, which uses kvm.
[18:14] <Psi-Jack> I don't use openvz stuff. I don't trust it.
[18:14] <alphe> I have some virtual machines running from the ceph store too
[18:14] <Psi-Jack> heh, even my database servers run off the ceph stor.
[18:15] <alphe> Psi-Jack interesting, I had a meeting today with my CEO and he asked me if ceph rbd had high enough I/O to host a database
[18:16] * cronix (~cronix@5.199.139.166) has joined #ceph
[18:16] <Psi-Jack> i run PostgreSQL off it, though, I have low activity.
[18:16] <alphe> I said that it was the case on linux but on windows it was totally abandoned
[18:17] <Psi-Jack> Two PostgreSQL servers, and two MySQL servers. MySQl being mostly dormant, PostgreSQL being used by things like Zabbix, Drupal, and ownCloud.
[18:17] <alphe> if you want to host an ms-sql server database on ceph and link it to a real windows server without a proxy, that will be problematic
[18:17] <alphe> Psi-Jack nice
[18:18] <Psi-Jack> Just downed my 2nd of 3rd ceph server to finish upgrading to centos 6.5 :)
[18:18] <Psi-Jack> Annnnd, success.
[18:21] <Psi-Jack> yeah, I've gone from ceph, just before dumpling, started with a non-lts version then updated to dumpling when it was ready. Then cuttlefish after that. Now I need to make the jump to emperor sometime soon. :)
[18:23] <Psi-Jack> One funny aspect of my ceph cluster is the differences in OSD disks between them. 3 servers. All 3 have SSDs backing the ceph journal and XFS logdev; 2 of the three have a 1TB, a 512GB, and a 256GB HDD, while 1 has a 2TB and a 512GB. They used to all be the same, but a 256GB died, and thus, got upgraded.
[18:26] <Psi-Jack> I've heard the nightmare stories of filling up an OSD disk, but I've never hit that because I monitor all the space on all disks/mount-points, and I allocate CRUSH properly to prevent it. :)
[18:27] <cmdrk> can you set public addr and cluster addr for an mds in the config file?
[18:30] <Psi-Jack> Annnnd, now. finally, all 3 ceph servers are upgraded to centos 6.5 and back to normal. :)
[18:33] * cronix (~cronix@5.199.139.166) Quit (Read error: Operation timed out)
[18:38] * fouxm (~fouxm@AOrleans-258-1-53-232.w90-24.abo.wanadoo.fr) has joined #ceph
[18:40] * angdraug (~angdraug@c-67-169-181-128.hsd1.ca.comcast.net) has joined #ceph
[18:52] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) Quit (Remote host closed the connection)
[18:52] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) has joined #ceph
[18:54] * mtanski (~mtanski@69.193.178.202) Quit (Quit: mtanski)
[18:58] <alphe> cmdrk yes you can
[18:58] <alphe> cmdrk it is more like a general range of IPs
[18:58] <cmdrk> i see
[18:58] <cmdrk> i am trying to add a public IP to a currently existing mds but not having much luck mounting it
[18:58] <alphe> and your mds has to be in both ranges of IPs
[18:59] <alphe> cmdrk because the mds was already inserted in the mdsmap
[18:59] <alphe> so you have to add it
[18:59] <cmdrk> gotcha
[18:59] <alphe> and optionally remove the old mds entry
[19:00] <cmdrk> i was thinking about adding a 2nd mds and then bringing down the first one
[19:00] <cmdrk> 2nd mds already having the pub ip config
[19:00] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) Quit (Ping timeout: 480 seconds)
[19:00] <alphe> ceph mds help will help you
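For context, a minimal ceph.conf sketch of the two network options being discussed; the subnets are placeholders. Daemons bind their client-facing address inside the public network.
    [global]
        public network = 192.168.1.0/24
        cluster network = 10.10.0.0/24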
[19:02] <alphe> it's strange, sometimes my ceph -s gives me 1 pool, which is correct, and the next time it gives me 4 pools in the pgmap stat line; that is weird
[19:09] * scuttlemonkey_ is now known as scuttlemonkey
[19:20] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[19:28] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:35] * jbd_ (~jbd_@2001:41d0:52:a00::77) has left #ceph
[19:42] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[19:52] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) has joined #ceph
[19:53] * Pedras (~Adium@216.207.42.132) has joined #ceph
[20:00] * \ask (~ask@oz.develooper.com) Quit (Ping timeout: 480 seconds)
[20:02] * noahmehl (~noahmehl@cpe-71-67-115-16.cinci.res.rr.com) Quit (Quit: noahmehl)
[20:09] * Pedras (~Adium@216.207.42.132) Quit (Remote host closed the connection)
[20:19] <cmdrk> hrm, still can't quite figure out how to modify the mdsmap to get the public IP in there
[20:20] * allsystemsarego_ (~allsystem@86.121.85.58) has joined #ceph
[20:20] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[20:20] * allsystemsarego_ (~allsystem@86.121.85.58) Quit ()
[20:22] * allsystemsarego (~allsystem@5-12-240-107.residential.rdsnet.ro) Quit (Read error: Operation timed out)
[20:27] * sroy (~sroy@2607:fad8:4:6:3e97:eff:feb5:1e2b) Quit (Quit: Quitte)
[20:28] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[20:32] * \ask (~ask@oz.develooper.com) has joined #ceph
[20:44] <cmdrk> bah.. just tried to add a mon and ceph had a meltdown.. hung on adding the mon with "ceph add mon" and then 'ceph -s' etc stopped working
[20:52] <cmdrk> http://pastebin.com/iLB5FpnV now strace on 'ceph -s' just gives me "Timeout" over and over
[20:52] <cmdrk> i have the mon running, nothing else
[21:00] * KindTwo (KindOne@50.96.230.63) has joined #ceph
[21:01] * KindOne (KindOne@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[21:01] * KindTwo is now known as KindOne
[21:06] <cmdrk> tried to re-inject the old monmap.. nothing, still just spits "timeout" in strace. no fault messages or anything like that
[21:08] * madkiss (~madkiss@089144201228.atnat0010.highway.a1.net) Quit (Quit: Leaving.)
[21:18] <cmdrk> trying to start an OSD hangs too
[21:18] <cmdrk> http://pastebin.com/3FNnC2qp
[21:19] * cmdrk wrecked this thing!
[21:20] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[21:28] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[21:56] * alphe (~alphe@0001ac6f.user.oftc.net) Quit (Quit: Leaving)
[21:58] * JC (~JC@71-94-44-243.static.trlk.ca.charter.com) has joined #ceph
[22:20] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[22:20] * carif (~mcarifio@cpe-74-78-54-137.maine.res.rr.com) Quit (Read error: Connection reset by peer)
[22:26] * ebo^ (~ebo@p200300624F2DA901CD2EC689F9F2A86A.dip0.t-ipconnect.de) Quit (Quit: Verlassend)
[22:29] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[22:29] * bjornar (~bjornar@ti0099a340-dhcp0395.bb.online.no) has joined #ceph
[22:29] <bjornar> Is there a max number of values returned for getomapvals ?
[22:31] <bjornar> if not, there seems to be a bug
[22:31] * ScOut3R_ (~ScOut3R@c83-253-234-122.bredband.comhem.se) has joined #ceph
[22:33] <bjornar> what happens is: I first set 511 keys to some val (omap), then I can return all key/val with getomapvals.. then I set the first 511 again, and add another 4489 keys to a total of 5000 ... when I do getomapvals now, I only get the first 511.. but with new values..
[22:33] <bjornar> I can list all the keys with getomapkeys, and also fetch individual keys up to 5000
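A small sketch of the rados CLI calls involved in the kind of test described above; the pool and object names are made up.
    rados -p rbd setomapval testobj key1 val1   # set a single omap key (creates the object if needed)
    rados -p rbd listomapkeys testobj           # list all omap keys
    rados -p rbd listomapvals testobj           # list keys and values (the command showing the truncation)
    rados -p rbd getomapval testobj key1        # fetch one key directly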
[22:33] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) Quit (Ping timeout: 480 seconds)
[22:36] * lightspeed (~lightspee@82-68-190-217.dsl.in-addr.zen.co.uk) has joined #ceph
[22:37] * tomaw_ (tom@tomaw.noc.oftc.net) has joined #ceph
[22:38] <joshd> bjornar: do you mean you're running 'rados listomapvals'? or is this an earlier version that called it 'getomapvals'?
[22:43] <bjornar> listomapvals
[22:44] <bjornar> sorry for typos
[22:46] <joshd> that'd be a bug in the rados tool then, it's meant to show all of them (fetching from the cluster in 512 at a time)
[22:48] * tomaw_ (tom@tomaw.noc.oftc.net) Quit (Quit: Quit)
[22:49] * tomaw_ (tom@tomaw.noc.oftc.net) has joined #ceph
[22:54] <bjornar> ok, so will you pass it upstream?
[22:54] <bjornar> I only get the first 512 then..
[22:55] <joshd> if you could file a bug at tracker.ceph.com that'd be great
[22:59] <bjornar> joshd, rados/librados is not a project on tracker
[23:02] <cmdrk> well, i don't know what i did but i fixed my pool. did a "ceph-mon -i a --extract-monmap", then "monmaptool --rm [bad mon ID]", then "ceph-mon -i a --inject-monmap" and it seems happy now. \o/
[23:02] <cmdrk> the monmap i extracted via "ceph mon getmap" + monmap tool + inject dance didnt work..
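For reference, the extract/edit/inject dance described above, written out as a sketch; the mon id a, the bad mon id b, and the temporary path are illustrative.
    service ceph stop mon.a                        # the monitor must be stopped for extract/inject
    ceph-mon -i a --extract-monmap /tmp/monmap     # dump the on-disk monmap
    monmaptool --print /tmp/monmap                 # inspect the entries
    monmaptool --rm b /tmp/monmap                  # drop the bad monitor
    ceph-mon -i a --inject-monmap /tmp/monmap      # write the edited map back
    service ceph start mon.a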
[23:03] * rudolfsteiner (~federicon@200.68.116.185) has joined #ceph
[23:03] * tomaw_ (tom@tomaw.noc.oftc.net) Quit (Quit: Quit)
[23:04] * tomaw_ (tom@basil.tomaw.net) has joined #ceph
[23:11] * BillK (~BillK-OFT@124.149.111.175) has joined #ceph
[23:18] * ScOut3R_ (~ScOut3R@c83-253-234-122.bredband.comhem.se) Quit (Remote host closed the connection)
[23:20] <joshd> bjornar: it can go in the ceph project, with rados: as the start of the title
[23:21] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) has joined #ceph
[23:24] * rendar (~s@host83-179-dynamic.56-79-r.retail.telecomitalia.it) Quit ()
[23:29] * sleinen (~Adium@77-58-245-10.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[23:34] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[23:34] * ScOut3R (~ScOut3R@c83-253-234-122.bredband.comhem.se) has joined #ceph
[23:41] <andreask> if I use ceph-deploy with the dm-crypt flag, shouldn't the separate journal device also end up on a dm-crypt device ... and not use the journal partition directly?
[23:43] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[23:56] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Quit: Ja odoh a vi sta 'ocete...)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.