#ceph IRC Log


IRC Log for 2016-09-12

Timestamps are in GMT/BST.

[0:09] * srk (~Siva@2605:6000:ed04:ce00:39b0:7710:c3db:7735) Quit (Ping timeout: 480 seconds)
[0:10] * doppelgrau1 (~doppelgra@132.252.235.172) has joined #ceph
[0:12] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[0:13] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) has joined #ceph
[0:16] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[0:21] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[0:25] * rendar (~I@host180-183-dynamic.46-79-r.retail.telecomitalia.it) Quit (Quit: std::lower_bound + std::less_equal *works* with a vector without duplicates!)
[0:30] * danieagle (~Daniel@187.10.25.218) Quit (Quit: Thanks for everything! :-) later! :-))
[0:31] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[0:32] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[0:38] * praveen (~praveen@122.172.62.196) has joined #ceph
[0:40] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[0:46] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[0:47] * LiamMon_ (~liam.monc@94.11.172.171) has joined #ceph
[0:48] * garphy`aw is now known as garphy
[0:49] * Kingrat (~shiny@2605:6000:1526:4063:31c0:3a54:e54b:89dd) Quit (Remote host closed the connection)
[0:49] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[0:50] * Kingrat (~shiny@2605:6000:1526:4063:ecdf:a098:2871:dc2c) has joined #ceph
[0:52] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[0:52] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) has joined #ceph
[0:53] * LiamMon (~liam.monc@94.14.193.69) Quit (Ping timeout: 480 seconds)
[0:56] * dougf_ (~dougf@75-131-32-223.static.kgpt.tn.charter.com) Quit (Ping timeout: 480 seconds)
[1:01] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[1:04] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[1:13] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[1:16] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[1:17] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[1:19] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[1:19] * cnf (~cnf@2a02:1807:3920:400:6c64:5c8e:8244:1fdb) Quit (Quit: My MacBook Air has gone to sleep. ZZZzzz…)
[1:20] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:31] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[1:36] * Skaag (~lunix@65.200.54.234) has joined #ceph
[1:36] * Skaag (~lunix@65.200.54.234) Quit ()
[1:38] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:38] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) has joined #ceph
[1:40] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[1:43] * Skaag (~lunix@65.200.54.234) has joined #ceph
[1:49] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:49] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) has joined #ceph
[1:51] * oms101_ (~oms101@p20030057EA4FC600C6D987FFFE4339A1.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[1:59] * oms101_ (~oms101@p20030057EA025200C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[2:05] * garphy is now known as garphy`aw
[2:30] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[2:35] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[2:39] * garphy`aw is now known as garphy
[2:40] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) has joined #ceph
[3:02] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[3:04] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[3:16] * masber (~masber@129.94.15.152) Quit (Quit: Leaving)
[3:17] * garphy is now known as garphy`aw
[3:19] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:19] * jfaj__ (~jan@p4FC25CA9.dip0.t-ipconnect.de) has joined #ceph
[3:22] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) has joined #ceph
[3:24] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[3:26] * jfaj_ (~jan@p4FC24ADE.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:26] * curtis864 (~Esvandiar@46.166.190.237) has joined #ceph
[3:29] * aj__ (~aj@x4db28402.dyn.telefonica.de) has joined #ceph
[3:30] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:31] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) has joined #ceph
[3:36] * derjohn_mobi (~aj@x590e0bde.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[3:39] * Skaag (~lunix@65.200.54.234) Quit (Quit: Leaving.)
[3:41] * kefu_ (~kefu@114.92.125.128) has joined #ceph
[3:42] * sebastian-w_ (~quassel@212.218.8.139) Quit (Remote host closed the connection)
[3:42] * sebastian-w (~quassel@212.218.8.138) has joined #ceph
[3:53] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[3:54] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) has joined #ceph
[3:56] * curtis864 (~Esvandiar@46.166.190.237) Quit ()
[3:56] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:58] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:03] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[4:05] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:08] * shyu (~Frank@218.241.172.114) Quit (Ping timeout: 480 seconds)
[4:09] * doppelgrau1 (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[4:13] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[4:13] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) has joined #ceph
[4:19] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[4:22] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:32] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[4:33] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:43] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[4:44] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) has joined #ceph
[4:47] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) has joined #ceph
[4:48] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[4:50] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[4:52] * Jeffrey4l (~Jeffrey@119.251.244.23) Quit (Remote host closed the connection)
[4:58] * Jeffrey4l (~Jeffrey@119.251.244.23) has joined #ceph
[5:13] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[5:15] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) has joined #ceph
[5:22] * Vacuum__ (~Vacuum@88.130.209.206) has joined #ceph
[5:24] <chengpeng_> clear
[5:24] <chengpeng_> #clear
[5:24] * chengpeng_ (~chengpeng@180.168.197.98) Quit (Quit: Leaving)
[5:29] * Vacuum_ (~Vacuum@88.130.202.250) Quit (Ping timeout: 480 seconds)
[5:33] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[5:36] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[5:38] * tgmedia (~tom@202.14.217.2) has joined #ceph
[5:39] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[5:46] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[5:47] * tgmedia (~tom@202.14.217.2) has left #ceph
[5:48] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) has joined #ceph
[5:48] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[5:53] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[5:55] * bara (~bara@121.244.54.198) has joined #ceph
[5:57] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[5:58] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) has joined #ceph
[6:00] * bara_ (~bara@121.244.54.198) has joined #ceph
[6:00] * bara_ (~bara@121.244.54.198) Quit (Remote host closed the connection)
[6:02] * mattbenjamin (~mbenjamin@125.16.34.66) has joined #ceph
[6:02] * kefu_ (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[6:03] * kefu_ (~kefu@114.92.125.128) has joined #ceph
[6:07] * clarjon1 (~Izanagi@108.61.123.67) has joined #ceph
[6:07] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[6:07] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[6:10] * walcubi_ (~walcubi@p5795B0BD.dip0.t-ipconnect.de) has joined #ceph
[6:10] * mattbenjamin (~mbenjamin@125.16.34.66) Quit (Ping timeout: 480 seconds)
[6:11] * walcubi (~walcubi@p5795BF57.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:11] * bniver (~bniver@125.16.34.66) has joined #ceph
[6:13] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) has joined #ceph
[6:18] * walcubi_ (~walcubi@p5795B0BD.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:18] * flisky (~Thunderbi@106.38.61.184) has joined #ceph
[6:20] * joshd (~jdurgin@125.16.34.66) has joined #ceph
[6:20] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) has joined #ceph
[6:22] * flisky (~Thunderbi@106.38.61.184) Quit ()
[6:24] * mattbenjamin (~mbenjamin@121.244.87.117) has joined #ceph
[6:24] * aj__ (~aj@x4db28402.dyn.telefonica.de) Quit (Ping timeout: 480 seconds)
[6:28] * bara (~bara@121.244.54.198) Quit (Ping timeout: 480 seconds)
[6:30] * jcsp (~jspray@125.16.34.66) has joined #ceph
[6:37] * clarjon1 (~Izanagi@635AAAJOS.tor-irc.dnsbl.oftc.net) Quit ()
[7:00] * Jeffrey4l_ (~Jeffrey@221.195.212.34) has joined #ceph
[7:00] * Jeffrey4l (~Jeffrey@119.251.244.23) Quit (Read error: Connection reset by peer)
[7:04] * bara (~bara@125.16.34.66) has joined #ceph
[7:08] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[7:08] * TomasCZ (~TomasCZ@yes.tenlab.net) Quit (Quit: Leaving)
[7:11] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) has joined #ceph
[7:18] * rwheeler (~rwheeler@125.16.34.66) has joined #ceph
[7:34] * nils_ (~nils_@doomstreet.collins.kg) has joined #ceph
[7:35] * karnan (~karnan@106.51.141.117) has joined #ceph
[7:41] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[7:41] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) has joined #ceph
[7:47] * mattbenjamin1 (~mbenjamin@121.244.87.117) has joined #ceph
[7:47] * mattbenjamin (~mbenjamin@121.244.87.117) Quit (Read error: Connection reset by peer)
[7:48] * karnan (~karnan@106.51.141.117) Quit (Remote host closed the connection)
[7:49] * lmb (~Lars@2a02:8109:8100:1d2c:2ad2:44ff:fedf:3318) Quit (Quit: Leaving)
[7:52] * rdas (~rdas@121.244.87.116) has joined #ceph
[7:58] * Phase (~Azru@213.61.149.100) has joined #ceph
[7:58] * Phase is now known as Guest133
[8:02] * Kurt (~Adium@2001:628:1:5:983f:a4ca:a2bb:967f) has joined #ceph
[8:05] * mattbenjamin1 (~mbenjamin@121.244.87.117) Quit (Ping timeout: 480 seconds)
[8:10] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Ping timeout: 480 seconds)
[8:11] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[8:11] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) has joined #ceph
[8:11] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[8:15] * kuku (~kuku@124.104.90.54) has joined #ceph
[8:20] * mattbenjamin (~mbenjamin@125.16.34.66) has joined #ceph
[8:21] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[8:21] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) has joined #ceph
[8:26] * sleinen (~Adium@macsl.switch.ch) has joined #ceph
[8:26] * aj__ (~aj@88.128.80.147) has joined #ceph
[8:28] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[8:28] * Guest133 (~Azru@635AAAJP4.tor-irc.dnsbl.oftc.net) Quit ()
[8:28] * briner (~briner@129.194.16.54) has joined #ceph
[8:36] * kefu_ (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[8:37] * kefu_ (~kefu@114.92.125.128) has joined #ceph
[8:40] * Hemanth (~hkumar_@121.244.87.117) has joined #ceph
[8:41] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) Quit (Quit: because)
[8:41] * kuku (~kuku@124.104.90.54) Quit (Read error: Connection reset by peer)
[8:42] * kuku (~kuku@124.104.90.54) has joined #ceph
[8:43] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[8:45] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) has joined #ceph
[8:48] * KindOne_ (kindone@h107.226.28.71.dynamic.ip.windstream.net) has joined #ceph
[8:48] * kuku (~kuku@124.104.90.54) Quit (Remote host closed the connection)
[8:50] * kuku (~kuku@124.104.90.54) has joined #ceph
[8:52] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) Quit (Remote host closed the connection)
[8:53] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) has joined #ceph
[8:54] * bara (~bara@125.16.34.66) Quit (Ping timeout: 480 seconds)
[8:55] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[8:55] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[8:55] * KindOne_ is now known as KindOne
[8:56] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) Quit (Quit: Leaving)
[8:57] * fsimonce (~simon@host98-71-dynamic.1-87-r.retail.telecomitalia.it) has joined #ceph
[8:57] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) has joined #ceph
[9:02] <IcePic> TheSov: I wonder if they know how much linux stuff they use, if you look at all the iscsi, nfs, smb storage things, and of course all network mgmt heads, fibre channel admin hosts, wifi controllers and so on... The list is rather long by now.
[9:06] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[9:06] * aj__ (~aj@88.128.80.147) Quit (Ping timeout: 480 seconds)
[9:08] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:10] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) has joined #ceph
[9:23] * kuku (~kuku@124.104.90.54) Quit (Remote host closed the connection)
[9:24] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[9:24] * rwheeler (~rwheeler@125.16.34.66) Quit (Ping timeout: 480 seconds)
[9:27] * bara (~bara@125.16.34.66) has joined #ceph
[9:32] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[9:36] * lmb (~Lars@tmo-096-180.customers.d1-online.com) has joined #ceph
[9:37] * aj__ (~aj@46.189.28.56) has joined #ceph
[9:42] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[9:43] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[9:53] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[9:55] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) has joined #ceph
[9:56] * TMM (~hp@dhcp-077-248-009-229.chello.nl) Quit (Quit: Ex-Chat)
[9:57] * kuku (~kuku@124.104.90.54) has joined #ceph
[9:57] * DanFoster (~Daniel@office.34sp.com) has joined #ceph
[10:01] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) has joined #ceph
[10:02] * kuku (~kuku@124.104.90.54) Quit (Remote host closed the connection)
[10:04] * hybrid512 (~walid@195.200.189.206) has joined #ceph
[10:05] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) has joined #ceph
[10:07] * lmb_ (~Lars@tmo-096-123.customers.d1-online.com) has joined #ceph
[10:08] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[10:09] * jcsp (~jspray@125.16.34.66) Quit (Ping timeout: 480 seconds)
[10:10] * kefu_ is now known as kefu|afk
[10:10] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:7472:ce93:221:48ca) has joined #ceph
[10:12] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit (Ping timeout: 480 seconds)
[10:12] * kefu|afk is now known as kefu_
[10:13] * lmb (~Lars@tmo-096-180.customers.d1-online.com) Quit (Ping timeout: 480 seconds)
[10:15] * lmb_ (~Lars@tmo-096-123.customers.d1-online.com) Quit (Quit: Leaving)
[10:16] * rraja (~rraja@121.244.87.117) has joined #ceph
[10:17] * picnic (~oftc-webi@tsf-484-wpa-1-091.epfl.ch) has joined #ceph
[10:19] * kuku (~kuku@124.104.90.54) has joined #ceph
[10:20] <picnic> does anyone know why, when using librados, threads are created for each read/write per object? This gets out of control when a librados::Rados lives for a long time. Is there a call that cleans all these threads up? Or do I have to destroy and reconnect?
[10:20] <picnic> or is there a way to prevent this? i.e. reuse threads from a pool?
[10:24] * LiamMon_ (~liam.monc@94.11.172.171) Quit (Ping timeout: 480 seconds)
[10:25] * LiamMon (~liam.monc@90.197.92.212) has joined #ceph
[10:28] * karnan (~karnan@125.16.34.66) has joined #ceph
[10:29] * KristopherBel (~Pulec@108.61.123.70) has joined #ceph
[10:30] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[10:31] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[10:32] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) has joined #ceph
[10:42] * AlexeyAbashkin (~AlexeyAba@91.207.132.67) has joined #ceph
[10:42] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) has joined #ceph
[10:45] * TMM (~hp@185.5.121.201) has joined #ceph
[10:47] * praveen__ (~praveen@122.171.72.198) has joined #ceph
[10:50] * cetex (~oskar@nadine.juza.se) has joined #ceph
[10:51] <cetex> So, we're running 5 Dell R720-XD with 16*8TB hdd's in raid-6 on each node, and have horrible read-performance in cephfs.
[10:52] * rotbeard (~redbeard@185.32.80.238) has joined #ceph
[10:52] * praveen (~praveen@122.172.62.196) Quit (Ping timeout: 480 seconds)
[10:53] <cetex> During recovery operations (moving data around) we have quite good performance 200-800MB/second, but when i do for example "dd if=<some file on cephfs> of=/dev/zero iflag=direct bs=1024k count=10000" I get numbers like 15MB/second.
[10:54] <cetex> My guess is that the difference is related to the number of simultaneous requests: recovery does a lot of simultaneous requests, while these reads are probably single-threaded and/or issue few requests at a time.
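A quick way to test that guess, sketched under the assumption of a cephfs mount at /mnt/cephfs and a handful of large test files (the file names are placeholders): if aggregate throughput scales with the number of concurrent readers, the bottleneck is per-stream request depth rather than the disks.

    # run several direct-I/O readers in parallel and sum their throughput
    for f in file1 file2 file3 file4; do
        dd if=/mnt/cephfs/$f of=/dev/null iflag=direct bs=1024k count=1000 &
    done
    wait   # each dd prints its own MB/s on completion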
[10:58] * kefu_ (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[10:59] * Dw_Sn (~Dw_Sn@00020a72.user.oftc.net) has joined #ceph
[10:59] * kefu_ (~kefu@114.92.125.128) has joined #ceph
[10:59] * KristopherBel (~Pulec@108.61.123.70) Quit ()
[11:03] * rwheeler (~rwheeler@125.16.34.66) has joined #ceph
[11:04] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) has joined #ceph
[11:06] * kuku (~kuku@124.104.90.54) Quit (Remote host closed the connection)
[11:07] * praveen__ (~praveen@122.171.72.198) Quit (Ping timeout: 480 seconds)
[11:11] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[11:21] <jprins_> Hi everyone. I have been testing with Ganesha against a Ceph backend and this works fine. Now I'm looking for information about things like Distributed Lock Manager in combination with Ceph. I have read some references about this in combination with GlusterFS, but as far as I understood it, it should be backend agnostic, as long as the backend supports the correct features.
[11:22] * bara (~bara@125.16.34.66) Quit (Ping timeout: 480 seconds)
[11:22] <jprins_> so basically, can I have DLM on Ganesha with a Ceph backend? And if this is possible, does someone have some reference documentation / configuration that I could look at?
[11:23] * kuku (~kuku@124.104.90.54) has joined #ceph
[11:24] * rwheeler (~rwheeler@125.16.34.66) Quit (Quit: Leaving)
[11:27] * bara (~bara@125.16.34.66) has joined #ceph
[11:28] * kuku (~kuku@124.104.90.54) Quit (Remote host closed the connection)
[11:31] * jcsp (~jspray@125.16.34.66) has joined #ceph
[11:33] * kuku (~kuku@124.104.90.54) has joined #ceph
[11:34] * joshd (~jdurgin@125.16.34.66) Quit (Quit: Leaving.)
[11:34] * bniver (~bniver@125.16.34.66) Quit (Remote host closed the connection)
[11:36] * dugravot6 (~dugravot6@nat-persul-plg.wifi.univ-lorraine.fr) Quit (Quit: Leaving.)
[11:36] * kuku (~kuku@124.104.90.54) Quit (Read error: Connection reset by peer)
[11:37] * kuku (~kuku@124.104.90.54) has joined #ceph
[11:42] * mattbenjamin (~mbenjamin@125.16.34.66) Quit (Ping timeout: 480 seconds)
[11:42] * jcsp (~jspray@125.16.34.66) Quit (Ping timeout: 480 seconds)
[11:43] * bara (~bara@125.16.34.66) Quit (Ping timeout: 480 seconds)
[11:53] * karnan (~karnan@125.16.34.66) Quit (Ping timeout: 480 seconds)
[11:57] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) has joined #ceph
[11:58] * mattbenjamin (~mbenjamin@125.16.34.66) has joined #ceph
[11:59] * kefu_ (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:07] * wkennington (~wkenningt@c-71-204-170-241.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[12:09] * mattbenjamin (~mbenjamin@125.16.34.66) Quit (Ping timeout: 480 seconds)
[12:15] * flisky (~Thunderbi@210.12.157.85) has joined #ceph
[12:15] * Hemanth (~hkumar_@121.244.87.117) Quit (Ping timeout: 480 seconds)
[12:16] * flisky (~Thunderbi@210.12.157.85) Quit ()
[12:26] * FidoNet (~FidoNet@91.235.44.155) Quit (Ping timeout: 480 seconds)
[12:29] * kuku (~kuku@124.104.90.54) Quit (Remote host closed the connection)
[12:41] * kuku (~kuku@124.104.90.54) has joined #ceph
[12:42] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:42] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[12:45] * praveen (~praveen@122.172.66.43) has joined #ceph
[12:50] * kuku (~kuku@124.104.90.54) Quit (Remote host closed the connection)
[12:50] * kuku (~kuku@124.104.90.54) has joined #ceph
[12:50] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Ping timeout: 480 seconds)
[12:54] * kuku (~kuku@124.104.90.54) Quit (Read error: Connection reset by peer)
[12:56] * praveen (~praveen@122.172.66.43) Quit (Ping timeout: 480 seconds)
[13:01] * sleinen1 (~Adium@130.59.94.35) has joined #ceph
[13:01] * sleinen1 (~Adium@130.59.94.35) Quit ()
[13:02] * danieagle (~Daniel@187.74.69.89) has joined #ceph
[13:02] * yankcrime (~yankcrime@185.43.216.241) has joined #ceph
[13:03] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) Quit (Remote host closed the connection)
[13:04] * kuku (~kuku@124.104.90.54) has joined #ceph
[13:07] * kuku (~kuku@124.104.90.54) Quit (Remote host closed the connection)
[13:08] * kuku (~kuku@124.104.90.54) has joined #ceph
[13:08] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[13:09] * sleinen (~Adium@macsl.switch.ch) Quit (Ping timeout: 480 seconds)
[13:09] * kuku (~kuku@124.104.90.54) Quit (Read error: Connection reset by peer)
[13:10] * kuku (~kuku@124.104.90.54) has joined #ceph
[13:17] * BlaXpirit (~irc@blaxpirit.com) has left #ceph
[13:17] * vicente (~~vicente@125-227-238-55.HINET-IP.hinet.net) Quit (Read error: Connection reset by peer)
[13:18] * LiamMon_ (~liam.monc@90.217.194.177) has joined #ceph
[13:24] * LiamMon (~liam.monc@90.197.92.212) Quit (Ping timeout: 480 seconds)
[13:27] * kuku (~kuku@124.104.90.54) Quit (Remote host closed the connection)
[13:27] * kuku (~kuku@124.104.90.54) has joined #ceph
[13:30] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[13:30] * kuku (~kuku@124.104.90.54) Quit (Read error: Connection reset by peer)
[13:34] * ivve (~zed@cust-gw-11.se.zetup.net) has joined #ceph
[13:35] * Racpatel (~Racpatel@2601:87:3:31e3::2433) has joined #ceph
[13:40] * thesix (~thesix@leifhelm.mur.at) Quit (Remote host closed the connection)
[13:50] <topro> hi there, i would like to give bluestore a try. is it possible to use bluestore on one OSD only while having the others still work with filestore? in other words, can I put the "osd objectstore = bluestore" into the cfg stanza of a single OSD or does it have to be in the global section?
[13:52] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[13:56] <doppelgrau> topro: single osd should work, but beware that it is still under heavy development
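For reference, a sketch of the kind of stanza topro is describing (the OSD id is made up; at the time bluestore also had to be switched on via the experimental-features option, and the objectstore setting only takes effect when the OSD's data store is created):

    [global]
    # jewel-era bluestore had to be explicitly enabled as experimental
    enable experimental unrecoverable data corrupting features = bluestore

    [osd.12]
    # only this OSD uses bluestore; all others keep the filestore default
    osd objectstore = bluestore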
[13:58] <topro> doppelgrau: I know, sage said the on-disk format will not be stable until kraken. so i think i will wait until kraken, but knowing that it can be tested on a single OSD makes me feel much more confident even if it might still have bugs
[13:58] <topro> am I wrong or shouldn't ceph with replication of 3 prevent a single osd from doing any harm?
[14:03] * kuku (~kuku@124.104.90.54) has joined #ceph
[14:03] <doppelgrau> topro: depends on the failure mode, simply shutting down won't do any harm, but an osd that delivers erroneous data can seriously harm the PGs where it is primary
[14:08] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[14:10] * kuku (~kuku@124.104.90.54) Quit (Remote host closed the connection)
[14:11] * kuku (~kuku@124.104.90.54) has joined #ceph
[14:14] * praveen_ (~praveen@122.172.66.43) has joined #ceph
[14:15] <topro> doppelgrau: well, that's a point. so data won't get verified against secondaries on reads, right? would be too much overhead anyway i assume
[14:15] * post-factum (~post-fact@vulcan.natalenko.name) Quit (Killed (NickServ (Too many failed password attempts.)))
[14:15] * post-factum (~post-fact@vulcan.natalenko.name) has joined #ceph
[14:15] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) Quit (Ping timeout: 480 seconds)
[14:17] <doppelgrau> topro: right, the client only talks to the current primary, and the primary is responsible for storing/retrieving the data => a "malicious" primary can do some harm
[14:19] * kuku (~kuku@124.104.90.54) Quit (Remote host closed the connection)
[14:19] * kuku (~kuku@124.104.90.54) has joined #ceph
[14:23] * kuku (~kuku@124.104.90.54) Quit (Read error: Connection reset by peer)
[14:28] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) Quit (Read error: Connection reset by peer)
[14:31] <topro> doppelgrau: ok, i see :/
[14:39] * ade (~abradshaw@194.169.251.11) has joined #ceph
[14:42] * bene2 (~bene@nat-pool-bos-t.redhat.com) has joined #ceph
[14:43] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[14:46] * sleinen (~Adium@2001:620:0:82::105) has joined #ceph
[14:54] * dugravot6 (~dugravot6@145.20.90.92.rev.sfr.net) has joined #ceph
[14:58] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[14:59] * Jeffrey4l_ (~Jeffrey@221.195.212.34) Quit (Quit: Leaving)
[14:59] * Jeffrey4l (~Jeffrey@221.195.212.34) has joined #ceph
[14:59] * nhm (~nhm@c-50-171-139-246.hsd1.mn.comcast.net) has joined #ceph
[14:59] * ChanServ sets mode +o nhm
[15:01] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[15:02] * ivve (~zed@cust-gw-11.se.zetup.net) Quit (Ping timeout: 480 seconds)
[15:03] * jordan_c (~jconway@cable-192.222.246.54.electronicbox.net) has joined #ceph
[15:04] * dugravot6 (~dugravot6@145.20.90.92.rev.sfr.net) Quit (Quit: Leaving.)
[15:05] * dugravot6 (~dugravot6@145.20.90.92.rev.sfr.net) has joined #ceph
[15:05] <jordan_c> can anyone recommend a nagios plugin/plugins for monitoring a ceph cluster?
[15:05] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[15:06] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) Quit ()
[15:06] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) has joined #ceph
[15:11] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[15:12] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[15:20] * johnavp1989 (~jpetrini@8.39.115.8) has joined #ceph
[15:20] <- *johnavp1989* To prove that you are human, please enter the result of 8+3
[15:21] * brad_mssw (~brad@66.129.88.50) has joined #ceph
[15:24] * jcsp (~jspray@121.244.54.198) has joined #ceph
[15:26] * picnic (~oftc-webi@tsf-484-wpa-1-091.epfl.ch) Quit (Ping timeout: 480 seconds)
[15:27] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Read error: Connection reset by peer)
[15:30] * walcubi (~walcubi@p5795B0BD.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[15:31] * dugravot6 (~dugravot6@145.20.90.92.rev.sfr.net) Quit (Ping timeout: 480 seconds)
[15:31] * EinstCrazy (~EinstCraz@61.165.253.29) has joined #ceph
[15:31] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) has joined #ceph
[15:32] <doppelgrau> jordan_c: there is one that works with cephdash, or a short shell script
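The "short shell script" variant can be as small as mapping ceph health output onto nagios exit codes; a minimal sketch, assuming the monitoring host has a keyring with rights to run the command:

    #!/bin/sh
    # nagios convention: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN
    status=$(ceph health 2>/dev/null) || { echo "UNKNOWN: cannot reach cluster"; exit 3; }
    echo "$status"
    case "$status" in
        HEALTH_OK*)   exit 0 ;;
        HEALTH_WARN*) exit 1 ;;
        *)            exit 2 ;;
    esac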
[15:34] * squizzi (~squizzi@107.13.237.240) has joined #ceph
[15:37] * walcubi (~walcubi@p5099a7c3.dip0.t-ipconnect.de) has joined #ceph
[15:38] * LiamMon_ (~liam.monc@90.217.194.177) Quit (Ping timeout: 480 seconds)
[15:39] * LiamMon (~liam.monc@94.14.200.225) has joined #ceph
[15:42] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[15:46] * doppelgrau (~doppelgra@132.252.235.172) Quit (Read error: Connection reset by peer)
[15:46] * valeech (~valeech@71-83-169-82.static.azus.ca.charter.com) has joined #ceph
[15:47] * garphy`aw is now known as garphy
[15:48] * srk (~Siva@cpe-70-113-23-93.austin.res.rr.com) Quit (Ping timeout: 480 seconds)
[15:48] * doppelgrau (~doppelgra@132.252.235.172) has joined #ceph
[15:49] * rdas (~rdas@121.244.87.116) Quit (Ping timeout: 480 seconds)
[15:49] * kefu (~kefu@114.92.125.128) has joined #ceph
[15:52] * WedTM (~CoZmicShR@exit0.liskov.tor-relays.net) has joined #ceph
[15:57] * karnan (~karnan@106.51.141.117) has joined #ceph
[15:58] * valeech (~valeech@71-83-169-82.static.azus.ca.charter.com) Quit (Quit: valeech)
[16:00] * wjw-freebsd (~wjw@vpn.ecoracks.nl) has joined #ceph
[16:01] * vata1 (~vata@207.96.182.162) has joined #ceph
[16:03] * rendar (~I@host224-179-dynamic.49-79-r.retail.telecomitalia.it) has joined #ceph
[16:04] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[16:04] * davidb (~David@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[16:05] * lcurtis_ (~lcurtis@47.19.105.250) Quit (Ping timeout: 480 seconds)
[16:06] * ErifKard (~ErifKard@MTRLPQ42-1176054809.sdsl.bell.ca) has joined #ceph
[16:07] <ErifKard> Good morning Everyone! We're trying to add another ceph node to an existing cluster with ceph-deploy. We're using the most recent version of hammer; we're able to do the ceph-deploy disk list nodename, but when we try to prepare the osd, we always get the error "bootstrap-osd keyring not found; run 'gatherkeys'".
[16:07] <ErifKard> even if we do the gatherkeys on the node itself, the error stays there..
[16:07] <ErifKard> any clue ?
[16:07] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[16:08] <davidb> One more info: disk zap is working 100%, but disk prepare is throwing the keyring error
[16:08] * kefu (~kefu@114.92.125.128) has joined #ceph
[16:10] * tserong (~tserong@203-214-92-220.dyn.iinet.net.au) has joined #ceph
[16:11] * wjw-freebsd (~wjw@vpn.ecoracks.nl) Quit (Ping timeout: 480 seconds)
[16:11] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[16:15] * chiluk (~quassel@172.34.213.162.lcy-01.canonistack.canonical.com) has joined #ceph
[16:15] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[16:15] <davidb> Version used Hammer 0.94.9
[16:19] * LiamMon_ (~liam.monc@94.13.40.207) has joined #ceph
[16:21] * Hemanth (~hkumar_@103.228.221.183) has joined #ceph
[16:21] * cronburg (~cronburg@nat-pool-bos-t.redhat.com) Quit (Quit: ZNC 1.6.2 - http://znc.in)
[16:22] * WedTM (~CoZmicShR@5AEAABMXO.tor-irc.dnsbl.oftc.net) Quit ()
[16:25] * LiamMon (~liam.monc@94.14.200.225) Quit (Ping timeout: 480 seconds)
[16:27] * jcsp (~jspray@121.244.54.198) Quit (Ping timeout: 480 seconds)
[16:30] <davidb> Good morning Everyone! We're trying to add another ceph node to an existing cluster with ceph-deploy. We're using the most recent version of hammer (0.94.9) and ceph-deploy (1.5.36); we're able to do the ceph-deploy disk list nodename, but when we try to prepare the osd, we always get the error "bootstrap-osd keyring not found; run 'gatherkeys'".
[16:30] <davidb> Even if we do the gatherkeys on the node itself, the error stays there.. any clue? It seems to be a ceph-deploy bug, or we might be missing a step, because we deployed all our OSD nodes the same way as we're doing today.
[16:30] * jcsp (~jspray@121.244.54.198) has joined #ceph
[16:34] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) has joined #ceph
[16:36] * LiamMon_ (~liam.monc@94.13.40.207) Quit (Ping timeout: 480 seconds)
[16:36] * kristen (~kristen@jfdmzpr01-ext.jf.intel.com) has joined #ceph
[16:37] * LiamMon (~liam.monc@90.211.185.211) has joined #ceph
[16:37] * wushudoin (~wushudoin@2601:646:8200:c9f0:2ab2:bdff:fe0b:a6ee) has joined #ceph
[16:38] * hoonetorg (~hoonetorg@77.119.226.254.static.drei.at) has joined #ceph
[16:40] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) Quit (Quit: Leaving.)
[16:41] * dugravot6 (~dugravot6@l-p-dn-in-4a.lionnois.site.univ-lorraine.fr) has joined #ceph
[16:41] * owasserm (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[16:50] * srk (~Siva@32.97.110.55) has joined #ceph
[16:53] * homosaur (~Aramande_@108.61.123.75) has joined #ceph
[16:56] <davidb> Good morning Everyone! We're trying to add another ceph node to an existing cluster with ceph-deploy. We're using the most recent version of hammer (0.94.9) and ceph-deploy (1.5.36); we're able to do the ceph-deploy disk list nodename, but when we try to prepare the osd, we always get the error "bootstrap-osd keyring not found; run 'gatherkeys'".
[16:56] <davidb> Even if we do the gatherkeys on the node itself, the error stays there.. any clue? It seems to be a ceph-deploy bug, or we might be missing a step, because we deployed all our OSD nodes the same way as we're doing today.
[16:56] * walcubi_ (~walcubi@p5795AFEA.dip0.t-ipconnect.de) has joined #ceph
[16:59] <SamYaple> davidb: the folder in which you execute ceph-deploy matters. are you in the same folder you were in when you initially bootstrapped the cluster?
[17:02] <davidb> yeap
[17:02] <davidb> oh wait
[17:02] <davidb> ;)
[17:03] <davidb> I'm in /etc/ceph on the monitor node
[17:04] * walcubi (~walcubi@p5099a7c3.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[17:04] * nils_____ (~nils_@82.149.255.29) has joined #ceph
[17:04] * Racpatel (~Racpatel@2601:87:3:31e3::2433) Quit (Ping timeout: 480 seconds)
[17:06] <davidb> SamYaple: I confirm that I'm in the same folder as we were in when deploying our node6
[17:06] <davidb> but node7 is being deployed now, after upgrading Hammer from 0.94.7 to 0.94.9
[17:06] * jowilkin (~jowilkin@2601:644:4000:b0bf:56ee:75ff:fe10:724e) has joined #ceph
[17:06] <davidb> and it's not deploying, because of the keyring error
[17:06] * mattbenjamin (~mbenjamin@121.244.54.198) has joined #ceph
[17:07] <davidb> http://tracker.ceph.com/issues/16814
[17:07] <davidb> this issue is pretty similar but with rgw stuff
[17:07] <davidb> I'm experiencing the same with ceph-deploy 1.5.36 now
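For context, the sequence SamYaple is getting at below: ceph-deploy looks for ceph.bootstrap-osd.keyring in its current working directory, so the usual fix is to run everything from the original deploy directory. A sketch; hostnames and device names are placeholders:

    cd ~/my-cluster                        # wherever ceph-deploy new was originally run
    ceph-deploy gatherkeys mon1            # refresh bootstrap keyrings from a monitor
    ceph-deploy disk zap node7:sdb
    ceph-deploy osd prepare node7:sdb:sdm  # data disk, optional journal device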
[17:09] * ade (~abradshaw@194.169.251.11) Quit (Ping timeout: 480 seconds)
[17:10] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[17:11] * nils_ (~nils_@doomstreet.collins.kg) Quit (Ping timeout: 480 seconds)
[17:11] * shaunm (~shaunm@ms-208-102-105-216.gsm.cbwireless.com) Quit (Ping timeout: 480 seconds)
[17:13] * Racpatel (~Racpatel@2601:87:3:31e3::2433) has joined #ceph
[17:16] * kmroz (~kilo@00020103.user.oftc.net) has joined #ceph
[17:19] * Dw_Sn (~Dw_Sn@00020a72.user.oftc.net) Quit (Quit: leaving)
[17:21] * valeech (~valeech@97.93.161.13) has joined #ceph
[17:23] * homosaur (~Aramande_@108.61.123.75) Quit ()
[17:23] * AotC (~Inuyasha@exit0.radia.tor-relays.net) has joined #ceph
[17:26] * nils_____ (~nils_@82.149.255.29) Quit (Quit: This computer has gone to sleep)
[17:27] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[17:27] * nils_____ (~nils_@82.149.255.29) has joined #ceph
[17:27] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) Quit (Quit: Leaving)
[17:27] * aj__ (~aj@46.189.28.56) Quit (Ping timeout: 480 seconds)
[17:28] * jdillaman (~jdillaman@pool-108-18-97-95.washdc.fios.verizon.net) has joined #ceph
[17:32] * aj__ (~aj@46.189.28.56) has joined #ceph
[17:33] * EinstCrazy (~EinstCraz@61.165.253.29) Quit (Remote host closed the connection)
[17:35] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[17:35] * nils_____ (~nils_@82.149.255.29) Quit (Ping timeout: 480 seconds)
[17:36] * Jeffrey4l_ (~Jeffrey@119.251.221.27) has joined #ceph
[17:37] * ircuser-1 (~Johnny@158.183-62-69.ftth.swbr.surewest.net) has joined #ceph
[17:39] * Jeffrey4l (~Jeffrey@221.195.212.34) Quit (Ping timeout: 480 seconds)
[17:44] * tsg (~tgohad@192.55.54.44) has joined #ceph
[17:45] * mykola (~Mikolaj@193.93.217.59) has joined #ceph
[17:46] * owasserm_ (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) has joined #ceph
[17:46] * owasserm_ (~owasserm@2001:984:d3f7:1:5ec5:d4ff:fee0:f6dc) Quit ()
[17:48] * praveen_ (~praveen@122.172.66.43) Quit (Ping timeout: 480 seconds)
[17:49] * Skaag (~lunix@65.200.54.234) has joined #ceph
[17:49] * Skaag (~lunix@65.200.54.234) Quit ()
[17:53] * AotC (~Inuyasha@26XAABVQA.tor-irc.dnsbl.oftc.net) Quit ()
[17:54] * shaunm (~shaunm@cpe-192-180-17-174.kya.res.rr.com) has joined #ceph
[17:55] * Skaag (~lunix@65.200.54.234) has joined #ceph
[17:58] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[18:00] * sudocat1 (~dibarra@192.185.1.19) has joined #ceph
[18:00] * ade (~abradshaw@tmo-103-148.customers.d1-online.com) has joined #ceph
[18:00] * sudocat2 (~dibarra@192.185.1.20) has joined #ceph
[18:01] * marrusl (~mark@nat-pool-bos-u.redhat.com) has joined #ceph
[18:02] * sudocat (~dibarra@192.185.1.20) Quit (Read error: Connection reset by peer)
[18:03] * TMM (~hp@185.5.121.201) Quit (Quit: Ex-Chat)
[18:04] * marrusl (~mark@nat-pool-bos-u.redhat.com) Quit ()
[18:04] * marrusl (~mark@nat-pool-bos-u.redhat.com) has joined #ceph
[18:08] * sudocat1 (~dibarra@192.185.1.19) Quit (Ping timeout: 480 seconds)
[18:08] * rraja (~rraja@121.244.87.117) Quit (Ping timeout: 480 seconds)
[18:09] <TheSov> IcePic, Sigh, not having a vmware plugin is really hard
[18:09] * derjohn_mob (~aj@46.189.28.95) has joined #ceph
[18:09] * aj__ (~aj@46.189.28.56) Quit (Read error: Connection reset by peer)
[18:13] * xarses (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[18:16] * dneary (~dneary@pool-96-233-46-27.bstnma.fios.verizon.net) has joined #ceph
[18:20] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[18:21] * karnan (~karnan@106.51.141.117) Quit (Quit: Leaving)
[18:22] <SamYaple> TheSov: i got thinking about this more. you would probably be better off with an iscsi target to the RBD
[18:22] <SamYaple> you can make that HA, it will be far more performant, and you _can_ grow it live
[18:22] <SamYaple> just have to run the iscsi hosts somewhere (vmware itself? with locally backed storage of course)
[18:22] * ade (~abradshaw@tmo-103-148.customers.d1-online.com) Quit (Quit: Too sexy for his shirt)
[18:23] * kefu is now known as kefu|afk
[18:23] <TheSov> ive tried using LRBD for HA iscsi, it doesnt work as well as it should
[18:23] <SamYaple> it's going to work better than cephfs + nfs, but i hear you
[18:24] <SamYaple> nothing will work well here, but i think the iscsi might be the most stable solution
[18:30] <rkeene> I thought about writing a VMware plugin for Ceph... but the VMware licensing scared me away.
[18:34] * yankcrime (~yankcrime@185.43.216.241) Quit (Ping timeout: 480 seconds)
[18:47] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) has joined #ceph
[18:52] <SamYaple> rkeene: no you looked at the licensing agreement, they still own your work and future work. and in an interesting twist, time is cyclical and they own your past work too
[18:52] <rkeene> It certainly convinced me to never have visited vmware.com :-)
[18:53] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[18:54] <SamYaple> new from vmware "new linux distro deploying ceph and opennebula"
[18:56] * DanFoster (~Daniel@office.34sp.com) Quit (Quit: Leaving)
[18:57] * xarses (~xarses@73.93.152.171) has joined #ceph
[19:02] * hybrid512 (~walid@195.200.189.206) Quit (Remote host closed the connection)
[19:02] * KindOne_ (~KindOne@h83.224.28.71.dynamic.ip.windstream.net) has joined #ceph
[19:03] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[19:06] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[19:06] * KindOne_ is now known as KindOne
[19:07] * sudocat2 (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[19:16] * oliveiradan2 (~doliveira@67.214.238.80) Quit (Remote host closed the connection)
[19:19] * xarses (~xarses@73.93.152.171) Quit (Ping timeout: 480 seconds)
[19:21] * dneary (~dneary@pool-96-233-46-27.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[19:24] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[19:27] * Brochacho (~alberto@97.93.161.13) has joined #ceph
[19:32] * phyphor (~DJComet@91.108.183.162) has joined #ceph
[19:33] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[19:37] * oliveiradan (~doliveira@137.65.133.10) has joined #ceph
[19:44] * marrusl (~mark@nat-pool-bos-u.redhat.com) has left #ceph
[19:47] * vasu (~vasu@c-73-231-60-138.hsd1.ca.comcast.net) Quit (Quit: Leaving)
[19:56] * TomasCZ (~TomasCZ@yes.tenlab.net) has joined #ceph
[19:59] * georgem (~Adium@206.108.127.16) has joined #ceph
[20:02] * davidzlap (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[20:02] * phyphor (~DJComet@5AEAABM2X.tor-irc.dnsbl.oftc.net) Quit ()
[20:03] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Read error: Connection reset by peer)
[20:03] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[20:03] * garphy is now known as garphy`aw
[20:09] * kefu|afk (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[20:10] * derjohn_mob (~aj@46.189.28.95) Quit (Ping timeout: 480 seconds)
[20:10] * rraja (~rraja@122.179.4.117) has joined #ceph
[20:12] * kefu (~kefu@114.92.125.128) has joined #ceph
[20:18] * rotbeard (~redbeard@185.32.80.238) Quit (Quit: Leaving)
[20:19] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[20:20] * branto (~branto@ip-78-102-208-181.net.upcbroadband.cz) Quit (Quit: Leaving.)
[20:20] * valeech (~valeech@97.93.161.13) Quit (Quit: valeech)
[20:21] * kefu (~kefu@114.92.125.128) has joined #ceph
[20:26] * valeech (~valeech@97.93.161.13) has joined #ceph
[20:27] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:7472:ce93:221:48ca) Quit (Ping timeout: 480 seconds)
[20:30] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) Quit (Ping timeout: 480 seconds)
[20:32] * dyasny (~dyasny@cable-192.222.152.136.electronicbox.net) has joined #ceph
[20:32] * TMM (~hp@dhcp-077-248-009-229.chello.nl) has joined #ceph
[20:32] * Brochacho (~alberto@97.93.161.13) Quit (Remote host closed the connection)
[20:33] * shaunm (~shaunm@cpe-192-180-17-174.kya.res.rr.com) Quit (Ping timeout: 480 seconds)
[20:37] * doppelgrau (~doppelgra@132.252.235.172) Quit (Quit: Leaving.)
[20:41] * zviratko (~Xa@108.61.123.75) has joined #ceph
[20:42] * dneary (~dneary@nat-pool-bos-u.redhat.com) has joined #ceph
[20:44] * sudocat (~dibarra@192.185.1.20) Quit (Ping timeout: 480 seconds)
[20:49] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[20:49] * mattbenjamin (~mbenjamin@121.244.54.198) Quit (Ping timeout: 480 seconds)
[20:58] <cetex> so, i've been doing performance tests and i get really horrible performance through cephfs..
[20:58] * valeech (~valeech@97.93.161.13) Quit (Quit: valeech)
[20:59] * sudocat (~dibarra@192.185.1.20) has joined #ceph
[20:59] <cetex> it's a pretty big raid array behind the drive, which can handle some beating; I wonder if it's possible to queue up more requests at once in the osd?
[21:01] <jprins_> cetex: In general, when creating a Ceph cluster, you don't put the drives behind a RAID array. This is because Ceph itself takes care of redundancy, and a RAID array is not a good fit for that.
[21:02] <jprins_> cetex: Besides that, do you have multiple hosts with a bunch of drives behind a raid array?
[21:05] * haplo37 (~haplo37@199.91.185.156) has joined #ceph
[21:05] <T1> you get performance penalties in several places with that
[21:06] <T1> first there's the network penalty, which is unavoidable since that's how ceph works
[21:08] <T1> then there are the writes to raid - these cause the controller to calculate parity for every write
[21:09] <sep> evening. i have some osd's that do not want to start, I did write a users list mail about it http://lists.ceph.com/pipermail/ceph-users-ceph.com/2016-September/012937.html ; does anyone know how to get those osd's running again? i have pg's down since they want to probe those osd's and hence io is hanging
[21:09] <T1> and since writes are small (typically 4k in size) your controller has to write an entire stripe to the rotating disks - perhaps 64k or 128k
[21:10] <T1> since you also said nothing about SSD journals I assume you have none
[21:10] <T1> this means that every write is sync'ed to the raid set as they come in
[21:11] <T1> this can eat all your performance and - as you see - result in horrible read performance since all disks are pretty busy writing full stripes for every small ceph write
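The journal pattern T1 describes (small synchronous writes) can be approximated directly when evaluating a device, e.g. against a scratch file on the array or a candidate journal SSD (the path is a placeholder):

    # small O_DSYNC writes, roughly what the OSD journal does
    dd if=/dev/zero of=/mnt/scratch/journal-test bs=4k count=10000 oflag=direct,dsync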
[21:11] * zviratko (~Xa@108.61.123.75) Quit ()
[21:17] <cetex> jprins_: yeah. we know
[21:17] <cetex> we ran gluster initially
[21:17] <cetex> but that was horrible, we had quite a big mess to clean up after an upgrade so we set up ceph.
[21:19] <cetex> but yeah.. 5 Dell r730-XD, 16x8TB HGST HE8 drives per node, PERC H730P Mini controller.
[21:22] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[21:23] <cetex> those 16 drives are in raid6
[21:24] <cetex> dd if=/dev/zero of=/data/perf-test5 bs=1024k count=100000 oflag=direct
[21:24] <jprins_> cetex: You have the cluster up and running now? Then I would do the following:
[21:24] <cetex> 104857600000 bytes (105 GB) copied, 47.1848 s, 2.2 GB/s
[21:24] <cetex> reads are similar
[21:24] <cetex> yeah. give me something to go on :)
[21:25] <jprins_> - Take one OSD out of the cluster, reconfigure the drives as JBOD on the controller, and put all drives back into the cluster, but now each with its own OSD daemon.
[21:26] <cetex> yeah. we have a plan to do that. one more node coming on thursday
[21:26] <cetex> that will serve as test-bed
[21:26] <jprins_> If you can spare some SSD drives, put a few of them in every server and put the journals for the OSD's on the SSD drive.
[21:26] <cetex> but still, i don't feel that this would explain the performance in cephfs, we get 10MB/second there..
[21:26] <T1> cetex: read what I wrote..
[21:26] <cetex> sorry, missed that. wasn't highlighted ;)
[21:27] <T1> and without ssd based journals you are in for a hard time
[21:27] <jprins_> I have 3 nodes in my testbed, all connected on 10G ethernet. I can push data like hell. Never any issue with performance.
[21:28] <jprins_> All disks JBOD. That is the way to go.
[21:29] <cetex> jprins_: right. thanks for that, it would explain some.
[21:29] <jprins_> You can test it right now. Just take one OSD out of the cluster, wait for everything to resync, reconfigure the disks to JBOD and put them all back in.
[21:29] <cetex> err, T1 i meant your input, the stripe set info is gold.
[21:29] <cetex> we're gonna wait until thursday, it's getting quite full on those nodes as well
[21:30] <cetex> don't want to risk anything
[21:30] <T1> .. and get SSDs for journal
[21:30] <jprins_> you will see some serious improvements, although you will only see real improvements when all nodes are converted. This is because the slowest OSD will drag the performance of the whole cluster down.
[21:31] <T1> (but read up on the pitfalls and limits ssds come with)
[21:31] <cetex> we wouldn't go for anything smaller than Intel P3700 in that case i believe.
[21:31] <T1> basically: as a rule of thumb no more than 8 OSDs per physical ssd
[21:31] <jprins_> T1: I have only SSD drives in my cluster :-)
[21:31] <m0zes> I've got some of the 730XD hosts with 16 8TB drives. definitely go with the ssd, preferably an intel DC P3700, and set up the osds as single disk raid-0 arrays.
[21:32] <cetex> the object store is for quite large files, the majority is >1GB in size. Is there some way to make ceph do larger writes to the storage?
[21:32] <m0zes> s/an/two/
[21:32] <T1> and if you lose an SSD you should expect that all data in the OSDs that ssd was a journal for is also lost
[21:33] <jprins_> m0zes: Why create raid-0 arrays? Just JBOD will do the trick best.
[21:33] <sep> any of you know how to get an osd running again? i have a few osd's from hammer 0.94.9 that do not want to start any more. no errors from dmesg or anything that indicates hardware errors; i can read files off the filesystem without problems, but trying to start the osd fails with this log output http://paste.debian.net/819581/
[21:33] <T1> my own limited tests did not show any difference between having the journal directly on the SSD or on a raid1 software raid with 2 ssds
[21:33] <m0zes> single disk raid-0 arrays let the controller utilize the raid cache as writeback when there is a BBU
[21:34] <T1> .. so I've got a set of SSDs acting as journal for a few OSDs
[21:34] <m0zes> if you trust the controller, that is ;)
[21:34] <cetex> :)
[21:34] <T1> and it's got a battery backed cache!
[21:35] <T1> beware of PERC controllers that change from writeback to writethrough while the battery is charging
[21:36] <cetex> but there's no way to make the osd make larger writes than a few KB at a time? :)
[21:36] <m0zes> the only downside (that I've experienced) with the writeback cache is that the controller doesn't know what to do with it upon reboot with a failed disk. manual intervention to discard the cached data from a broken osd.
[21:37] <cetex> And, another thing, we had no throughput going on on the nodes, and then did those reads in cephfs, 10MB/second still expected?
[21:37] * squizzi_ (~squizzi@107.13.237.240) has joined #ceph
[21:37] <m0zes> cetex: the writes to the osd itself should be ~4MB by default. the journals tend to have smaller writes because of O_DSYNC.
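For what it's worth, the ~4MB m0zes mentions comes from the cephfs file layout and can be raised per directory for newly created files; a sketch assuming a client that exposes the layout vxattrs (the 64MB value is only an example):

    # new files under this directory get 64MB objects instead of 4MB
    setfattr -n ceph.dir.layout.object_size -v 67108864 /mnt/cephfs/bigfiles
    getfattr -n ceph.dir.layout /mnt/cephfs/bigfiles   # verify the layout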
[21:37] <cetex> aah, right
[21:38] <cetex> so then the cephfs reads should be fast?
[21:38] <cetex> in that case it shouldn't be an issue with the setup of the drives, it feels like something isn't asking for enough data somewhere..
[21:38] <m0zes> should be. I get reads in the multiple GB/s range for hot data. 26 730XD hosts, though.
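For the single-stream reads, client readahead is the usual knob that determines how much data one reader keeps in flight; a sketch for the kernel cephfs client (rasize is in bytes; the value and monitor address are placeholders):

    # remount cephfs with a 64MB readahead window
    mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/secret,rasize=67108864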
[21:39] <T1> for cephfs data flow directly from the OSDs to the clients
[21:39] <T1> flows even
[21:39] <cetex> yeah. cephfs is running on the same nodes here..
[21:40] <T1> so a 200MB file can be read from 50 different OSDs
[21:40] * brad_mssw (~brad@66.129.88.50) Quit (Quit: Leaving)
[21:40] <cetex> 10Gbit network between the nodes, all nodes in the same rack, latency <1ms
[21:40] * squizzi (~squizzi@107.13.237.240) Quit (Quit: bye)
[21:40] <cetex> <0.05ms..
[21:41] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[21:44] * squizzi (~squizzi@107.13.237.240) has joined #ceph
[21:50] * squizzi_ (~squizzi@107.13.237.240) Quit (Ping timeout: 480 seconds)
[21:54] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) Quit (Remote host closed the connection)
[21:58] * valeech (~valeech@97.93.161.13) has joined #ceph
[22:01] * haplo37 (~haplo37@199.91.185.156) Quit (Remote host closed the connection)
[22:04] * rendar (~I@host224-179-dynamic.49-79-r.retail.telecomitalia.it) Quit (Ping timeout: 480 seconds)
[22:05] * georgem (~Adium@206.108.127.16) Quit (Read error: Connection reset by peer)
[22:05] * xarses (~xarses@64.124.158.3) has joined #ceph
[22:05] * georgem (~Adium@206.108.127.16) has joined #ceph
[22:06] * Brochacho (~alberto@97.93.161.13) has joined #ceph
[22:07] * ira (~ira@c-24-34-255-34.hsd1.ma.comcast.net) has joined #ceph
[22:08] * georgem (~Adium@206.108.127.16) has left #ceph
[22:13] * mykola (~Mikolaj@193.93.217.59) Quit (Quit: away)
[22:19] * srk (~Siva@32.97.110.55) Quit (Ping timeout: 480 seconds)
[22:24] * srk (~Siva@32.97.110.55) has joined #ceph
[22:30] * dneary (~dneary@nat-pool-bos-u.redhat.com) Quit (Ping timeout: 480 seconds)
[22:30] * rendar (~I@host224-179-dynamic.49-79-r.retail.telecomitalia.it) has joined #ceph
[22:31] * Hemanth (~hkumar_@103.228.221.183) Quit (Ping timeout: 480 seconds)
[22:32] * mattbenjamin (~mbenjamin@121.244.54.198) has joined #ceph
[22:40] * LiamMon (~liam.monc@90.211.185.211) Quit (Ping timeout: 480 seconds)
[22:45] * northrup (~northrup@75-146-11-137-Nashville.hfc.comcastbusiness.net) has joined #ceph
[22:45] * LiamMon (~liam.monc@90.196.42.2) has joined #ceph
[22:46] <Anticime1> cetex: there are *a lot* of osd parameters
[22:46] * rraja (~rraja@122.179.4.117) Quit (Remote host closed the connection)
[22:49] <northrup> anyone from InkTank on?
[22:50] * Anticime1 is now known as Anticimex
[22:52] <northrup> I'm having an issue where the pgs for rbd are gone due to the osd removal and crush map update
[22:53] <northrup> so I get things like "Error EAGAIN: pg 0.25 has no primary osd"
[22:53] * Brochacho (~alberto@97.93.161.13) Quit (Quit: Brochacho)
[22:55] <northrup> I tried deleting them with no luck, and then I forced a create
[22:55] <northrup> this is what I get now "pg 0.25 is stuck inactive since forever, current state creating, last acting []"
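Typical triage for that state, sketched with northrup's pg 0.25 as the example (pg query needs a reachable primary, so it may not answer while the acting set is empty):

    ceph osd tree                  # are the surviving osds up/in and still placed by the crush map?
    ceph pg dump_stuck inactive    # list pgs that cannot go active
    ceph pg 0.25 query             # show peering state / why the acting set is empty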
[22:56] * Brochacho (~alberto@97.93.161.13) has joined #ceph
[22:56] * sleinen (~Adium@2001:620:0:82::105) Quit (Quit: Leaving.)
[22:59] <northrup> anyone?
[23:07] * dack (~oftc-webi@gateway.ola.bc.ca) Quit (Quit: Page closed)
[23:10] * jordan_c (~jconway@cable-192.222.246.54.electronicbox.net) Quit (Ping timeout: 480 seconds)
[23:14] * ChrisHolcombe (~chris@97.93.161.13) has joined #ceph
[23:22] * jordan_c (~jconway@cable-192.222.246.54.electronicbox.net) has joined #ceph
[23:26] * sepa (~sepa@aperture.GLaDOS.info) has joined #ceph
[23:28] * dneary (~dneary@pool-96-233-46-27.bstnma.fios.verizon.net) has joined #ceph
[23:30] * seosepa (~sepa@aperture.GLaDOS.info) Quit (Ping timeout: 480 seconds)
[23:55] * dneary (~dneary@pool-96-233-46-27.bstnma.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[23:57] * vata1 (~vata@207.96.182.162) Quit (Quit: Leaving.)
[23:59] * srk (~Siva@32.97.110.55) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.