#ceph IRC Log


IRC Log for 2014-03-17

Timestamps are in GMT/BST.

[0:01] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:04] * davidzlap (~Adium@ip68-5-239-214.oc.oc.cox.net) has joined #ceph
[0:06] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[0:11] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[0:14] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[0:18] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[0:20] <bdonnahue> janos_ i noticed this behavior happens if a single node drops out of the ceph osd tree
[0:20] <bdonnahue> i'll be experimenting this week
[0:22] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) has joined #ceph
[0:24] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[0:25] * hasues (~hazuez@12.216.44.38) Quit (Quit: Leaving.)
[0:25] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[0:26] * arye (~arye@pool-74-102-217-160.nwrknj.fios.verizon.net) has joined #ceph
[0:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[0:28] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[0:30] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[0:31] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[0:36] * Cube (~Cube@66-87-64-224.pools.spcsdns.net) has joined #ceph
[0:39] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[0:47] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:09] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[1:17] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:18] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[1:18] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[1:19] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has left #ceph
[1:24] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Ping timeout: 480 seconds)
[1:25] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) Quit (Remote host closed the connection)
[1:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[1:42] * hasues (~hazuez@12.216.44.38) has joined #ceph
[1:51] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[1:55] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[2:02] * Cube (~Cube@66-87-64-224.pools.spcsdns.net) Quit (Quit: Leaving.)
[2:11] * yguang11 (~yguang11@2406:2000:ef96:e:48c9:99d3:3ffc:34f3) has joined #ceph
[2:12] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[2:14] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[2:16] * The_Bishop (~bishop@2001:470:50b6:0:d991:cf56:94ef:8d91) Quit (Quit: Who the hell is this peer? If I catch him I'll reset his connection!)
[2:18] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[2:26] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[2:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[2:26] * Cube (~Cube@66-87-64-224.pools.spcsdns.net) has joined #ceph
[2:32] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[2:32] * thuc_ (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[2:32] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[2:37] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) Quit (Remote host closed the connection)
[2:37] * Psi-Jack (~Psi-Jack@psi-jack.user.oftc.net) has joined #ceph
[2:43] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) Quit (Ping timeout: 480 seconds)
[2:46] * thuc_ (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[2:47] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[2:47] * hasues (~hazuez@12.216.44.38) Quit (Quit: Leaving.)
[2:54] * thuc_ (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[2:55] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:09] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[3:09] * thuc_ (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Remote host closed the connection)
[3:09] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[3:12] * oblu (~o@62.109.134.112) Quit (Ping timeout: 480 seconds)
[3:13] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[3:16] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:17] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:17] * haomaiwang (~haomaiwan@117.79.232.172) Quit (Remote host closed the connection)
[3:17] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:18] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) has joined #ceph
[3:18] * sarob (~sarob@2601:9:7080:13a:140c:d402:a579:a314) has joined #ceph
[3:20] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[3:20] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[3:22] * semitech1ical (~adam@ip70-176-51-26.ph.ph.cox.net) Quit (Quit: leaving)
[3:23] * glzhao (~glzhao@220.181.11.232) has joined #ceph
[3:25] * fdmanana (~fdmanana@bl10-140-160.dsl.telepac.pt) Quit (Quit: Leaving)
[3:26] * sarob (~sarob@2601:9:7080:13a:140c:d402:a579:a314) Quit (Ping timeout: 480 seconds)
[3:27] * oblu (~o@62.109.134.112) has joined #ceph
[3:27] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[3:31] * haomaiwa_ (~haomaiwan@106.38.255.124) has joined #ceph
[3:33] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[3:34] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit ()
[3:35] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:36] * Cube (~Cube@66-87-64-224.pools.spcsdns.net) Quit (Quit: Leaving.)
[3:37] * erkules_ (~erkules@port-92-193-108-226.dynamic.qsc.de) has joined #ceph
[3:38] * haomaiwang (~haomaiwan@219-87-173-15.static.tfn.net.tw) Quit (Ping timeout: 480 seconds)
[3:39] * yguang11 (~yguang11@2406:2000:ef96:e:48c9:99d3:3ffc:34f3) Quit (Remote host closed the connection)
[3:39] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[3:40] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[3:44] * erkules (~erkules@port-92-193-65-62.dynamic.qsc.de) Quit (Ping timeout: 480 seconds)
[3:44] * valeech (~valeech@pool-71-171-123-210.clppva.fios.verizon.net) Quit (Quit: valeech)
[3:47] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[3:48] * The_Bishop (~bishop@f050144099.adsl.alicedsl.de) has joined #ceph
[3:48] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[3:49] * yguang11 (~yguang11@2406:2000:ef96:e:1191:322d:7bb6:131) has joined #ceph
[3:51] * zjohnson_ (~zjohnson@guava.jsy.net) Quit (Read error: Operation timed out)
[3:51] * zjohnson (~zjohnson@guava.jsy.net) has joined #ceph
[3:52] * tserong_ (~tserong@58-6-128-43.dyn.iinet.net.au) Quit (Quit: Leaving)
[3:53] * tserong_ (~tserong@203-57-208-132.dyn.iinet.net.au) has joined #ceph
[3:57] * BillK (~BillK-OFT@106-69-66-119.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[4:00] * BillK (~BillK-OFT@124-168-247-50.dyn.iinet.net.au) has joined #ceph
[4:03] * yguang11 (~yguang11@2406:2000:ef96:e:1191:322d:7bb6:131) Quit (Remote host closed the connection)
[4:04] * yguang11 (~yguang11@2406:2000:ef96:e:1191:322d:7bb6:131) has joined #ceph
[4:06] * yguang11_ (~yguang11@2406:2000:ef96:e:e929:e18f:b89e:151f) has joined #ceph
[4:08] * yguang11_ (~yguang11@2406:2000:ef96:e:e929:e18f:b89e:151f) Quit (Remote host closed the connection)
[4:09] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[4:12] * yguang11 (~yguang11@2406:2000:ef96:e:1191:322d:7bb6:131) Quit (Ping timeout: 480 seconds)
[4:15] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) has joined #ceph
[4:18] * sarob (~sarob@2601:9:7080:13a:953c:2feb:46a6:880a) has joined #ceph
[4:20] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Remote host closed the connection)
[4:20] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[4:26] * sarob (~sarob@2601:9:7080:13a:953c:2feb:46a6:880a) Quit (Ping timeout: 480 seconds)
[4:28] * yguang11 (~yguang11@vpn-nat.peking.corp.yahoo.com) Quit (Ping timeout: 480 seconds)
[4:35] * paul (~quassel@S0106c8fb267c0b17.ok.shawcable.net) has joined #ceph
[4:36] * paul is now known as Guest3584
[4:39] * paul_ (~quassel@S0106c8fb267c0b17.ok.shawcable.net) Quit (Ping timeout: 480 seconds)
[4:48] * yguang11 (~yguang11@2406:2000:ef96:e:19ad:355b:3316:ad4a) has joined #ceph
[4:51] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[4:59] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:11] * tserong_ is now known as tserong
[5:15] * Vacum (~vovo@i59F7A590.versanet.de) has joined #ceph
[5:18] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[5:21] * yguang11_ (~yguang11@vpn-nat.peking.corp.yahoo.com) has joined #ceph
[5:22] * Vacum_ (~vovo@88.130.210.177) Quit (Ping timeout: 480 seconds)
[5:22] * BillK (~BillK-OFT@124-168-247-50.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[5:24] * BillK (~BillK-OFT@124.149.106.242) has joined #ceph
[5:24] * yguang11 (~yguang11@2406:2000:ef96:e:19ad:355b:3316:ad4a) Quit (Ping timeout: 480 seconds)
[5:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[5:27] * rektide (~rektide@eldergods.com) Quit (Ping timeout: 480 seconds)
[5:31] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) has joined #ceph
[5:36] * princeholla (~princehol@p5DE96C81.dip0.t-ipconnect.de) has joined #ceph
[5:50] * princeholla (~princehol@p5DE96C81.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[6:00] * arye (~arye@pool-74-102-217-160.nwrknj.fios.verizon.net) Quit (Quit: arye)
[6:16] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) Quit (Ping timeout: 480 seconds)
[6:18] * sarob (~sarob@2601:9:7080:13a:bcab:ed72:84d5:6127) has joined #ceph
[6:21] * sarob_ (~sarob@2601:9:7080:13a:218e:bb78:3990:5371) has joined #ceph
[6:26] * sarob (~sarob@2601:9:7080:13a:bcab:ed72:84d5:6127) Quit (Ping timeout: 480 seconds)
[6:29] * sarob_ (~sarob@2601:9:7080:13a:218e:bb78:3990:5371) Quit (Ping timeout: 480 seconds)
[6:33] * Cube (~Cube@12.248.40.138) has joined #ceph
[6:35] * Cube (~Cube@12.248.40.138) Quit (Read error: Connection reset by peer)
[6:35] * Cube1 (~Cube@12.248.40.138) has joined #ceph
[6:38] * sputnik13 (~sputnik13@wsip-68-105-248-60.sd.sd.cox.net) Quit (Quit: Textual IRC Client: www.textualapp.com)
[6:40] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[6:41] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit ()
[6:41] * BillK (~BillK-OFT@124.149.106.242) Quit (Ping timeout: 480 seconds)
[6:44] * madkiss (~madkiss@bzq-218-11-179.cablep.bezeqint.net) Quit (Quit: Leaving.)
[6:45] * BillK (~BillK-OFT@58-7-141-92.dyn.iinet.net.au) has joined #ceph
[7:00] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[7:00] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[7:06] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Quit: Leaving.)
[7:13] * swizgard_ (~swizgard@port-87-193-133-18.static.qsc.de) has joined #ceph
[7:13] * swizgard (~swizgard@port-87-193-133-18.static.qsc.de) Quit (Read error: Connection reset by peer)
[7:13] * julian_ (~julianwa@125.70.135.123) has joined #ceph
[7:18] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[7:23] * capri (~capri@212.218.127.222) Quit (Ping timeout: 480 seconds)
[7:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[7:27] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) has joined #ceph
[7:44] * madkiss (~madkiss@80.74.101.226) has joined #ceph
[7:45] * jnq (~jon@0001b7cc.user.oftc.net) Quit (Ping timeout: 480 seconds)
[7:45] * root__ (~root@176.28.50.139) has joined #ceph
[7:45] * jnq (~jon@95.85.22.50) has joined #ceph
[7:47] * root (~root@176.28.50.139) Quit (Ping timeout: 480 seconds)
[7:47] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) Quit (Ping timeout: 480 seconds)
[7:58] * mattt (~textual@94.236.7.190) has joined #ceph
[8:05] * madkiss (~madkiss@80.74.101.226) Quit (Quit: Leaving.)
[8:17] * hasues (~hazuez@108-236-232-243.lightspeed.knvltn.sbcglobal.net) Quit (Quit: Leaving.)
[8:18] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[8:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[8:27] <nuved> hello all
[8:28] <nuved> i deployed ceph on 3 hosts with 6 osds
[8:28] <nuved> everything is ok except the writing and reading speed, which is very slow
[8:29] <nuved> fs is btrfs and mounted with noatime
[8:30] <nuved> we have 2 replicas and the journal is on the same device as the osd
[8:31] <nuved> i set the pg and pgp numbers to 512
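For reference, pg/pgp values like the ones nuved quotes are set per pool; a minimal sketch with the ceph CLI, using "data" as a placeholder pool name:

    # create a pool with 512 placement groups
    ceph osd pool create data 512 512
    # or raise the counts on an existing pool
    ceph osd pool set data pg_num 512
    ceph osd pool set data pgp_num 512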
[8:32] * hughsaunders (~hughsaund@wherenow.org) Quit (Ping timeout: 480 seconds)
[8:33] * hughsaunders (~hughsaund@wherenow.org) has joined #ceph
[9:16] * analbeard (~shw@141.0.32.124) has joined #ceph
[9:18] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[9:19] * Sysadmin88 (~IceChat77@176.254.32.31) Quit (Read error: Connection reset by peer)
[9:20] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) has joined #ceph
[9:26] * steki (~steki@91.195.39.5) has joined #ceph
[9:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[9:27] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[9:27] * ChanServ sets mode +v andreask
[9:28] * fretb (~fretb@frederik.pw) Quit (Quit: leaving)
[9:33] * fretb (~fretb@frederik.pw) has joined #ceph
[9:35] <pieterl> hi... my ceph cluster is stuck in degraded / remapped mode for a handful of pgs
[9:36] <pieterl> i've already restarted all osd's as seems to be the prevailing method.. i'm running emperor on ubuntu 13.04 with btrfs backing filesystems
[9:36] <pieterl> 4 nodes, 11 osd's (yeah that's pushing it) per node
[9:36] <pieterl> 2014-03-17 09:34:13.473225 mon.0 [INF] pgmap v211163: 2390 pgs: 2175 active+clean, 212 active+remapped, 3 active+degraded; 4048 GB data, 11372 GB used, 29566 GB / 40986 GB avail; 135 kB/s wr, 0 op/s; 782/2871809 objects degraded (0.027%)
[9:37] <pieterl> this happened after introducing racks to the crushmap
[9:38] <pieterl> after restarting ALL osd's it dropped from 930 objects to the current 782.. i'd like to understand what's happening.. this is a pilot cluster, to see if we feel comfortable replacing our netapps with ceph
[9:38] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) has joined #ceph
[9:40] <pieterl> nuved: have you verified the performance of the network/disks independently of ceph?
[9:40] * thb (~me@2a02:2028:10d:49d0:6267:20ff:fec9:4e40) has joined #ceph
[9:40] <nuved> pieterl: yes, (839 MB) copied, 0.412572 s, 2.0 GB/s
[9:41] <nuved> pieterl: i use this command for writing direct on hard: dd if=/dev/zero of=/var/lib/ceph/osd/osd1/o bs=8k count=100k
[9:45] <nuved> pieterl: this my bench writing on ceph too : http://justpaste.it/erqf
[9:46] <nuved> pieterl: and this is my ceph config: http://justpaste.it/saved/3486742/0e4e03a0
[9:53] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[9:53] <darkfader> that's an async write
[9:53] <darkfader> i'm not sure if it's any good to measure with dd w/o sync
[9:56] * rakshith (~rakshith@202.3.120.5) has joined #ceph
[9:56] <rakshith> Hello Ceph Gurus
[9:56] <rakshith> had a quick query
[9:56] <rakshith> i see that ceph supports the S3 and Swift APIs
[9:57] <rakshith> i am trying to understand where and how the metadata gets handled as part of, let's say, S3
[9:57] <rakshith> ??
[9:57] <nuved> darkfader: with oflag=direct, does it seem like an ok test?
[9:58] <nuved> dd if=/dev/zero of=/var/lib/ceph/osd/osd1/o bs=1G count=1 oflag=direct
[9:58] <nuved> 1073741824 bytes (1.1 GB) copied, 5.23816 s, 205 MB/s
[9:58] <darkfader> that should be ok
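Beyond dd against the backing disk, the cluster itself can be exercised from a client with the stock rados tool; a minimal sketch, with "testpool" as a placeholder pool name:

    # 10-second write benchmark; --no-cleanup keeps the objects around for the read pass
    rados bench -p testpool 10 write --no-cleanup
    # sequential read benchmark over the objects written above
    rados bench -p testpool 10 seq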
[9:59] <rakshith> any leads for my query?
[10:01] <pieterl> rakshith: query ambiguous
[10:01] <rakshith> i assume the metadata associated with the object usually gets stored in some sort of database
[10:01] <pieterl> but i suggest querying ceph and swift in google
[10:02] <rakshith> so does ceph maintain some sort of key-value database to handle metadata
[10:02] <rakshith> ??
[10:02] <rakshith> @pieterl: you mean the question needs a bit of pruning ??
[10:02] <cephalobot> rakshith: Error: "pieterl:" is not a valid command.
[10:02] <rakshith> pieterl: you mean the question needs a bit of pruning ??
[10:06] * thomnico (~thomnico@2a01:e35:8b41:120:949c:a798:7300:a4ef) has joined #ceph
[10:08] * rakshith (~rakshith@202.3.120.5) has left #ceph
[10:08] * julian_ (~julianwa@125.70.135.123) Quit (Quit: afk)
[10:15] * LeaChim (~LeaChim@host86-166-182-74.range86-166.btcentralplus.com) has joined #ceph
[10:17] <nuved> pieterl: this is my test on my hardware with iperf tool http://justpaste.it/erqz
[10:18] * sarob (~sarob@2601:9:7080:13a:64d9:3f80:71b5:ccff) has joined #ceph
[10:24] * AfC (~andrew@2001:44b8:31cb:d400:6e88:14ff:fe33:2a9c) Quit (Quit: Leaving.)
[10:24] * jtang1 (~jtang@80.111.79.253) has joined #ceph
[10:25] <jtang1> leseb: good spot!
[10:25] <jtang1> probably wont do much on that role today
[10:25] <jtang1> its st patricks day in ireland ;)
[10:26] * sarob (~sarob@2601:9:7080:13a:64d9:3f80:71b5:ccff) Quit (Ping timeout: 480 seconds)
[10:27] * rakshith (~rakshith@202.3.120.5) has joined #ceph
[10:33] * TMM (~hp@sams-office-nat.tomtomgroup.com) has joined #ceph
[10:38] * barryo (~borourke@cumberdale.ph.ed.ac.uk) Quit (Quit: Leaving.)
[10:41] * allsystemsarego (~allsystem@188.26.167.156) has joined #ceph
[10:47] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[10:53] * cofol1986 (~xwrj@110.90.119.113) has joined #ceph
[10:54] <fghaas> rakshith: radosgw stores metadata in extended attributes in rados objects
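This can be inspected directly, assuming a default radosgw layout where bucket data lands in the .rgw.buckets pool; the object name below is a hypothetical placeholder:

    # list a few objects in the radosgw data pool and pick one
    rados -p .rgw.buckets ls | head
    # show the extended attributes radosgw attached to it (hypothetical object name)
    rados -p .rgw.buckets listxattr default.4567.1_photo.jpg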
[11:04] <leseb> jtang1: :)
[11:04] <leseb> jtang1: same in france :)
[11:11] * madkiss (~madkiss@80.74.101.226) has joined #ceph
[11:12] * The_Bishop (~bishop@f050144099.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[11:14] * Cube1 (~Cube@12.248.40.138) Quit (Quit: Leaving.)
[11:14] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) has joined #ceph
[11:17] * thomnico (~thomnico@2a01:e35:8b41:120:949c:a798:7300:a4ef) Quit (Quit: Ex-Chat)
[11:17] * thomnico (~thomnico@2a01:e35:8b41:120:949c:a798:7300:a4ef) has joined #ceph
[11:18] * sarob (~sarob@2601:9:7080:13a:74ba:1dd7:c1f7:dd43) has joined #ceph
[11:21] <jtang1> leseb: gah, tabs
[11:21] <jtang1> i hate tabs
[11:21] <leseb> jtang1: ^^
[11:22] <jtang1> i thought i fixed the indenting/formatting problems
[11:24] * The_Bishop (~bishop@f050144099.adsl.alicedsl.de) has joined #ceph
[11:25] <jtang1> you prefer spaces or tabs?
[11:25] * nuved (~novid@81.31.238.20) Quit (Ping timeout: 480 seconds)
[11:26] * sarob (~sarob@2601:9:7080:13a:74ba:1dd7:c1f7:dd43) Quit (Ping timeout: 480 seconds)
[11:28] <jtang1> spaces it is!
[11:31] <rakshith> fghaas: Thanks for that update
[11:32] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[11:35] * Midnightmyth (~quassel@93-167-84-102-static.dk.customer.tdc.net) Quit (Read error: Connection reset by peer)
[11:40] * b0e (~aledermue@juniper1.netways.de) has joined #ceph
[11:41] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[11:42] * mattch (~mattch@pcw3047.see.ed.ac.uk) Quit (Quit: Leaving.)
[11:48] * mattt_ (~textual@94.236.7.190) has joined #ceph
[11:48] * piezo (~piezo@108-88-37-13.lightspeed.iplsin.sbcglobal.net) Quit (Read error: Operation timed out)
[11:48] * alexbligh (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) Quit (Quit: I shouldn't really be here - dircproxy 1.0.5)
[11:50] * mattt (~textual@94.236.7.190) Quit (Ping timeout: 480 seconds)
[11:50] * mattt_ is now known as mattt
[11:52] * jcsp (~Adium@0001bf3a.user.oftc.net) has joined #ceph
[11:57] <saaby> does anyone know how to actually list which rados snapshots have been created on a pool?
[11:59] * alexbligh1 (~alexbligh@89-16-176-215.no-reverse-dns-set.bytemark.co.uk) has joined #ceph
[12:05] <andreask> saaby: rados -p your_pool lssnap
[12:05] <andreask> ... if you mean pool-snapshots
[12:05] <saaby> andreask: thanks, yep, that was what I meant. :)
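A quick usage sketch of the pool-snapshot commands being discussed, with pool and snapshot names as placeholders:

    rados -p mypool mksnap before-upgrade   # create a pool snapshot
    rados -p mypool lssnap                  # list snapshots on the pool
    rados -p mypool rmsnap before-upgrade   # remove it again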
[12:06] * diegows (~diegows@190.190.5.238) has joined #ceph
[12:06] * garphy`aw is now known as garphy
[12:14] * fdmanana (~fdmanana@bl10-140-160.dsl.telepac.pt) has joined #ceph
[12:18] * sarob (~sarob@2601:9:7080:13a:90b9:ea14:d495:fea8) has joined #ceph
[12:20] * thb (~me@82.113.106.189) has joined #ceph
[12:21] * thb is now known as Guest3623
[12:21] * mattt (~textual@94.236.7.190) Quit (Quit: Textual IRC Client: www.textualapp.com)
[12:22] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) Quit (Quit: Leaving.)
[12:26] * sarob (~sarob@2601:9:7080:13a:90b9:ea14:d495:fea8) Quit (Ping timeout: 480 seconds)
[12:29] * mattt (~textual@94.236.7.190) has joined #ceph
[12:33] <rakshith> A quick question.. how much effort does it take to make the OSD interact with a custom file system underneath that's already working with a RAID group
[12:33] <rakshith> ?
[12:34] * joao (~joao@a95-92-33-54.cpe.netcabo.pt) has joined #ceph
[12:34] * ChanServ sets mode +o joao
[12:36] * BillK (~BillK-OFT@58-7-141-92.dyn.iinet.net.au) Quit (Ping timeout: 480 seconds)
[12:37] * glzhao (~glzhao@220.181.11.232) Quit (Quit: leaving)
[12:40] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[12:46] * glambert (~glambert@37.157.50.80) has joined #ceph
[12:46] * nuved (~novid@81.31.238.20) has joined #ceph
[12:51] * mattch (~mattch@pcw3047.see.ed.ac.uk) has joined #ceph
[12:53] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) has joined #ceph
[12:55] * rakshith (~rakshith@202.3.120.5) has left #ceph
[12:56] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Ping timeout: 480 seconds)
[12:57] * Guest3623 (~me@82.113.106.189) Quit (Ping timeout: 480 seconds)
[12:58] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[13:02] * madkiss (~madkiss@80.74.101.226) Quit (Quit: Leaving.)
[13:18] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[13:25] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[13:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[13:26] * mattt (~textual@94.236.7.190) Quit (Read error: Operation timed out)
[13:26] * The_Bishop (~bishop@f050144099.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[13:27] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[13:28] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) Quit (Read error: Operation timed out)
[13:29] * mattt (~textual@94.236.7.190) has joined #ceph
[13:31] * thb (~me@82.113.121.205) has joined #ceph
[13:31] * thb is now known as Guest3631
[13:32] * t0rn (~ssullivan@2607:fad0:32:a02:d227:88ff:fe02:9896) has joined #ceph
[13:35] * thomnico (~thomnico@2a01:e35:8b41:120:949c:a798:7300:a4ef) Quit (Quit: Ex-Chat)
[13:38] * The_Bishop (~bishop@f050144099.adsl.alicedsl.de) has joined #ceph
[13:39] * thomnico (~thomnico@2a01:e35:8b41:120:949c:a798:7300:a4ef) has joined #ceph
[13:42] * jeff-YF (~jeffyf@pool-173-66-76-78.washdc.fios.verizon.net) has joined #ceph
[13:43] <- *fghaas* ?OTRv23?
[13:43] <- *fghaas* fghaas@irc.oftc.net has requested an Off-the-Record private conversation <http://otr.cypherpunks.ca/>. However, you do not have a plugin to support that.
[13:43] <- *fghaas* See http://otr.cypherpunks.ca/ for more information.
[13:45] * b0e1 (~aledermue@juniper1.netways.de) has joined #ceph
[13:49] * mattt_ (~textual@94.236.7.190) has joined #ceph
[13:49] * mattt (~textual@94.236.7.190) Quit (Ping timeout: 480 seconds)
[13:49] * mattt_ is now known as mattt
[13:50] * b0e (~aledermue@juniper1.netways.de) Quit (Ping timeout: 480 seconds)
[13:50] * japuzzo (~japuzzo@ool-4570886e.dyn.optonline.net) has joined #ceph
[13:51] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[13:55] <nuved> i can't understand why my writing speed on rbd is so slow :(
[13:55] <nuved> (1.1 GB) copied, 510.196 s, 2.1 MB/s
[13:55] <nuved> writing 1.1 GB took 510 seconds :/
[13:56] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[13:57] <nuved> on xfs it was better than it is now on btrfs with more osds
[14:01] * arye (~arye@c-76-116-131-5.hsd1.de.comcast.net) has joined #ceph
[14:01] * arye (~arye@c-76-116-131-5.hsd1.de.comcast.net) Quit ()
[14:03] * jeff-YF (~jeffyf@pool-173-66-76-78.washdc.fios.verizon.net) Quit (Quit: jeff-YF)
[14:03] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[14:03] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has joined #ceph
[14:04] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[14:07] * zidarsk8 (~zidar@prevod.fri1.uni-lj.si) has left #ceph
[14:12] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[14:15] * ircuser-1 (~ircuser-1@35.222-62-69.ftth.swbr.surewest.net) has joined #ceph
[14:16] <jerker> nuved: you have activated local KVM RBD cache? Hmm. http://ceph.com/docs/master/rbd/qemu-rbd/#qemu-cache-options
[14:18] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[14:20] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[14:21] <jerker> I write via netatalk to a VM running on an OSD node. I get around 20-25 MB/s when a couple of clients are backing up. But then the Ceph traffic is also on the same gigabit link; I run quite old hardware. When/if I get more OSD nodes it will get interesting again.
[14:23] * jtang1 (~jtang@80.111.79.253) Quit (Quit: Leaving.)
[14:24] * Guest3631 (~me@82.113.121.205) Quit (Remote host closed the connection)
[14:25] * jeff-YF (~jeffyf@pool-173-66-76-78.washdc.fios.verizon.net) has joined #ceph
[14:26] <nuved> jerker: on btrfs, is it better to enable or disable it?
[14:26] * markbby (~Adium@168.94.245.2) has joined #ceph
[14:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[14:29] <nuved> jerker: i mounted the rbd manually on /mnt to test write speed, but with kvm the vm does not boot completely because of the low write speed either
[14:29] * markbby (~Adium@168.94.245.2) Quit (Remote host closed the connection)
[14:29] * markbby (~Adium@168.94.245.2) has joined #ceph
[14:35] * garphy is now known as garphy`aw
[14:36] * b0e1 (~aledermue@juniper1.netways.de) Quit (Quit: Leaving.)
[14:37] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[14:43] * jeff-YF_ (~jeffyf@67.23.123.228) has joined #ceph
[14:45] * thomnico (~thomnico@2a01:e35:8b41:120:949c:a798:7300:a4ef) Quit (Ping timeout: 480 seconds)
[14:48] * jeff-YF (~jeffyf@pool-173-66-76-78.washdc.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[14:48] * jeff-YF_ is now known as jeff-YF
[14:56] * jtang1 (~jtang@80.111.79.253) has joined #ceph
[14:56] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) has joined #ceph
[14:58] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:00] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) has joined #ceph
[15:01] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) has joined #ceph
[15:02] * ChrisNBlum (~Adium@dhcp-ip-230.dorf.rwth-aachen.de) has joined #ceph
[15:05] * jcsp (~Adium@0001bf3a.user.oftc.net) Quit (Ping timeout: 480 seconds)
[15:06] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:09] * lianghaoshen (~slhhust@113.240.125.152) has joined #ceph
[15:09] * thb (~me@2a02:2028:10d:49d0:6267:20ff:fec9:4e40) has joined #ceph
[15:10] * kwmiebach (sid16855@charlton.irccloud.com) Quit (Read error: Connection reset by peer)
[15:10] * kwmiebach (sid16855@id-16855.charlton.irccloud.com) has joined #ceph
[15:10] * lianghaoshen (~slhhust@113.240.125.152) Quit ()
[15:12] * lofejndif (~lsqavnbok@tor-exit3-readme.dfri.se) has joined #ceph
[15:14] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) Quit (Quit: Ex-Chat)
[15:14] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) has joined #ceph
[15:18] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[15:18] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[15:18] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) has joined #ceph
[15:19] * dlan^ (~dennis@183.194.225.202) has joined #ceph
[15:22] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) has joined #ceph
[15:25] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:26] * markbby (~Adium@168.94.245.2) has joined #ceph
[15:26] * sarob (~sarob@c-50-161-65-119.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:30] * raso (~raso@deb-multimedia.org) Quit (Quit: WeeChat 0.4.3)
[15:31] * raso (~raso@deb-multimedia.org) has joined #ceph
[15:33] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[15:34] * bandrus (~Adium@75.5.250.197) has joined #ceph
[15:34] * ismell (~ismell@host-64-17-89-79.beyondbb.com) has joined #ceph
[15:37] * ismell (~ismell@host-64-17-89-79.beyondbb.com) has left #ceph
[15:38] * jtang1 (~jtang@80.111.79.253) Quit (Quit: Leaving.)
[15:39] * Cataglottism (~Cataglott@dsl-087-195-030-184.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[15:40] * jtang1 (~jtang@80.111.79.253) has joined #ceph
[15:41] * JCL (~JCL@2601:9:5980:39b:bc8f:59a7:1df3:4b3d) has joined #ceph
[15:43] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) has joined #ceph
[15:43] * The_Bishop (~bishop@f050144099.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[15:44] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[15:48] * neurodrone (~neurodron@static-108-30-171-7.nycmny.fios.verizon.net) Quit (Quit: neurodrone)
[15:51] * sroy (~sroy@207.96.182.162) has joined #ceph
[15:54] * gregsfortytwo1 (~Adium@cpe-172-250-69-138.socal.res.rr.com) Quit (Quit: Leaving.)
[15:55] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[15:58] * wrale_ (~wrale@cpe-107-9-20-3.woh.res.rr.com) has joined #ceph
[15:59] * The_Bishop (~bishop@f050144099.adsl.alicedsl.de) has joined #ceph
[15:59] * jtang1 (~jtang@80.111.79.253) Quit (Quit: Leaving.)
[16:03] * dlan^ (~dennis@183.194.225.202) Quit (Quit: leaving)
[16:04] * jtang1 (~jtang@80.111.79.253) has joined #ceph
[16:05] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[16:05] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:06] <isodude> Hi, just a question about the CPU governor: are there any problems with running ondemand as the governor on OSDs or MONs?
[16:07] <isodude> Or is the recommendation to use performance?
[16:07] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[16:08] * xarses (~andreww@c-24-23-183-44.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:16] * linuxkidd (~linuxkidd@dhcp-64-102-35-170.cisco.com) has joined #ceph
[16:18] * sarob (~sarob@2601:9:7080:13a:d5e2:c0e1:d8f4:3ac1) has joined #ceph
[16:21] * Meths (~meths@2.25.189.44) Quit (Ping timeout: 480 seconds)
[16:26] * sarob (~sarob@2601:9:7080:13a:d5e2:c0e1:d8f4:3ac1) Quit (Ping timeout: 480 seconds)
[16:29] * erwyn (~erwyn@markelous.net) has left #ceph
[16:31] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[16:34] * wusui (~Warren@2607:f298:a:607:c568:4757:5bf1:b42f) Quit (Quit: Leaving)
[16:34] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Read error: Connection reset by peer)
[16:34] * warrenSusui (~Warren@2607:f298:a:607:c568:4757:5bf1:b42f) Quit (Quit: Leaving)
[16:34] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[16:34] * ChanServ sets mode +v andreask
[16:34] * WarrenUsui (~Warren@2607:f298:a:607:c568:4757:5bf1:b42f) Quit (Quit: Leaving)
[16:36] * theonceandfuturewarrenusui (~Warren@2607:f298:a:607:c568:4757:5bf1:b42f) Quit (Quit: Leaving)
[16:36] * bandrus (~Adium@75.5.250.197) Quit (Quit: Leaving.)
[16:41] <jerker> nuved: it is faster for the client using the RBD to enable the cache. Without the cache my VM was very slow under io load.
[16:41] * alfredodeza (~alfredode@198.206.133.89) has left #ceph
[16:43] * Meths (~meths@2.25.213.161) has joined #ceph
[16:45] * marrusl (~mark@209-150-43-182.c3-0.wsd-ubr2.qens-wsd.ny.cable.rcn.com) has joined #ceph
[16:48] <nuved> jerker: thx man, now i'm re-deploying the ceph cluster with a new config from the beginning; i will add the "rbd cache = true" line to my new config in the client section
[16:51] <jerker> nuved: If you run KVM/QEMU you can just add the argument on the command line.
[16:52] <jerker> nuved: I just copy-and-pasted, but he command line argument for my system disk look like this for QEMU/KVM: "-drive format=raw,file=rbd:data/${NAME}-root,id=drive1,if=none,cache=writeback -device driver=ide-drive,drive=drive1,discard_granularity=512"
[16:53] <jerker> (Replace ${NAME} with the name of the RBD)
[16:54] * ircolle (~Adium@2601:1:8380:2d9:dd2b:96c:cc8c:aaed) has joined #ceph
[16:55] * xarses (~andreww@12.164.168.117) has joined #ceph
[16:56] <nuved> jerker: thx man for sharing your config :)
[16:56] <jerker> I get 96.8 MB/s when reading linearly from /dev/sda and 66 MB/s when writing and calling sync.
[16:58] * ircolle1 (~Adium@c-67-172-132-222.hsd1.co.comcast.net) has joined #ceph
[16:58] <nuved> jerker: do you separate the journal path from the osd device?
[16:59] * reed (~reed@75-101-54-131.dsl.static.sonic.net) has joined #ceph
[16:59] <jerker> nuved: Yes, i run with a 10 GB SSD journal on a separate partition on the two OSD nodes. Two of the total four disks have an integrated 8 GB flash cache too. So I guess some of the bursts just hit the SSD cache.
[17:00] <jerker> nuved: I just have two OSD nodes.
[17:00] * ircolle1 (~Adium@c-67-172-132-222.hsd1.co.comcast.net) Quit ()
[17:01] * geekmush (~Adium@cpe-66-68-198-33.rgv.res.rr.com) Quit (Quit: Leaving.)
[17:01] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[17:01] * hasues (~hasues@kwfw01.scrippsnetworksinteractive.com) has joined #ceph
[17:01] * jeremyh28 (~fire@27.106.31.42) has joined #ceph
[17:02] * geekmush (~Adium@cpe-66-68-198-33.rgv.res.rr.com) has joined #ceph
[17:02] * terje__ (~joey@174-16-124-95.hlrn.qwest.net) Quit (Read error: Operation timed out)
[17:02] <nuved> jerker: even using btrfs, do we need an ssd? right now i do not have access to an ssd drive :/
[17:02] <jerker> On friday I discovered a new potential bug actually. The partition tables of both my system drives are gone... I do not know if "ceph-deploy zap" did it or what has happened. It was not what I expected. But I have not figured out exactly when it happened. :) Will try to run "ceph-deploy zap" again to see what happens.
[17:02] * ircolle1 (~Adium@2601:1:8380:2d9:2dd5:f778:3a91:3c73) has joined #ceph
[17:02] * theonceandfuturewarrenusui (~Warren@2607:f298:a:607:c568:4757:5bf1:b42f) has joined #ceph
[17:03] * theonceandfuturewarrenusui (~Warren@2607:f298:a:607:c568:4757:5bf1:b42f) Quit ()
[17:04] <jerker> nuved: I ran ZFS first with Ext4 on top and it was dead slow and unstable. So I run XFS now with an SSD journal. My guess is that one does not need a journal for btrfs, but then one does not get the performance boost of letting some writes in smaller bursts just hit the journals. (Not sure if Ceph actually works that way, I just assume.)
[17:05] * ircolle (~Adium@2601:1:8380:2d9:dd2b:96c:cc8c:aaed) Quit (Ping timeout: 480 seconds)
[17:06] * terje__ (~joey@75-171-225-63.hlrn.qwest.net) has joined #ceph
[17:07] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Ping timeout: 480 seconds)
[17:07] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[17:07] * joef1 (~Adium@2620:79:0:131:e0a8:77b7:bebf:e5a6) Quit (Quit: Leaving.)
[17:08] * rmoe (~quassel@173-228-89-134.dsl.static.sonic.net) Quit (Ping timeout: 480 seconds)
[17:08] * joef (~Adium@2620:79:0:131:41c2:3bfc:9e89:ed2b) has joined #ceph
[17:09] * mattt (~textual@94.236.7.190) Quit (Ping timeout: 480 seconds)
[17:11] * jyluke (~oftc-webi@118.100.4.146) has joined #ceph
[17:12] * ircolle1 (~Adium@2601:1:8380:2d9:2dd5:f778:3a91:3c73) Quit (Quit: Leaving.)
[17:12] <jyluke> hello, just wondering had anyone here faced issue with mds daemon flapping?
[17:13] <nuved> jerker: thx for your help, first i prefer to test with btrfs, for that i upgraded all os to ubuntu 14.04 beta ;) if i can't reach a suitable speed then i'll test with xfs too
[17:15] * Meistarin (sid19523@0001c3c8.user.oftc.net) Quit ()
[17:15] * analbeard (~shw@141.0.32.124) Quit (Quit: Leaving.)
[17:15] * bandrus (~Adium@66-87-134-117.pools.spcsdns.net) has joined #ceph
[17:16] * Meistarin (~cunt@0001c3c8.user.oftc.net) has joined #ceph
[17:16] <acaos_> so, I have a small question about the object namespaces, if anyone here is familiar with them
[17:16] * acaos_ is now known as acaos
[17:17] * gdavis33 (~gdavis@38.122.12.254) has joined #ceph
[17:17] * gdavis33 (~gdavis@38.122.12.254) has left #ceph
[17:18] * sarob (~sarob@2601:9:7080:13a:147c:84ad:6a3a:66a5) has joined #ceph
[17:19] * madkiss (~madkiss@bzq-218-11-179.cablep.bezeqint.net) has joined #ceph
[17:19] * kiwnix (~kiwnix@00011f91.user.oftc.net) Quit (Quit: Leaving)
[17:21] * bandrus (~Adium@66-87-134-117.pools.spcsdns.net) Quit (Quit: Leaving.)
[17:23] * xarses_ (~andreww@12.164.168.117) has joined #ceph
[17:25] * joef (~Adium@2620:79:0:131:41c2:3bfc:9e89:ed2b) Quit (Quit: Leaving.)
[17:25] * steki (~steki@91.195.39.5) Quit (Ping timeout: 480 seconds)
[17:26] * sarob (~sarob@2601:9:7080:13a:147c:84ad:6a3a:66a5) Quit (Ping timeout: 480 seconds)
[17:27] * madkiss (~madkiss@bzq-218-11-179.cablep.bezeqint.net) Quit (Quit: Leaving.)
[17:30] * rmoe (~quassel@12.164.168.117) has joined #ceph
[17:30] * ircolle (~Adium@2601:1:8380:2d9:409e:ba25:3325:cf38) has joined #ceph
[17:30] * joef (~Adium@2620:79:0:131:41c2:3bfc:9e89:ed2b) has joined #ceph
[17:31] * madkiss (~madkiss@bzq-218-11-179.cablep.bezeqint.net) has joined #ceph
[17:31] * wusui (~Warren@2607:f298:a:607:c568:4757:5bf1:b42f) has joined #ceph
[17:33] * Cube (~Cube@66-87-64-159.pools.spcsdns.net) has joined #ceph
[17:34] * lluis (~oftc-webi@pat.hitachigst.com) has joined #ceph
[17:35] * nwat (~textual@eduroam-230-52.ucsc.edu) has joined #ceph
[17:35] <lluis> is there any command to disable the "self-healing" mechanism, and manually trigger repairs to evaluate the repair time?
[17:37] <sage> lluis: ceph osd set noout to do it temporarily
[17:37] <sage> or, you can set 'mon osd down out interval = 0' in the config to disable it
[17:37] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[17:38] <lluis> sage, thanks, but I think this will prevent new repairs from happening, since the node is not marked as 'out'
[17:38] * markbby (~Adium@168.94.245.2) has joined #ceph
[17:38] * gdavis33 (~gdavis@38.122.12.254) has joined #ceph
[17:38] * gdavis33 (~gdavis@38.122.12.254) has left #ceph
[17:38] <lluis> I want a fair way to measure the repair time of a replicated PG vs an erasure-coded PG
[17:39] <sage> usually the thing that triggers 'repair' is that the down osd is marked out and a backfill/recovery happens..
[17:40] * wrale_ (~wrale@cpe-107-9-20-3.woh.res.rr.com) Quit (Quit: Leaving...)
[17:40] <sage> if you want to compare the two, i would suggest stopping an osd and marking it out as the 'start', and wait until it is active+clean again for the 'end' time
[17:40] * sroy (~sroy@207.96.182.162) Quit (Quit: Quitte)
[17:41] * dpippenger (~riven@66-192-9-78.static.twtelecom.net) has joined #ceph
[17:41] <lluis> sage: Can I mark an OSD out, or should I wait until it is automatically discovered?
[17:43] <lluis> "ceph osd out {osd-num}" will do the trick ?
[17:44] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) Quit (Ping timeout: 480 seconds)
[17:45] <sage> lluis: yeah, that's the way to do it ('ceph osd out N')
[17:46] <lluis> sage: thanks!
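Putting sage's procedure together, a rough timing loop might look like the following sketch (osd id 3 and the upstart service syntax are illustrative, not from the log):

    sudo stop ceph-osd id=3            # 'start' of the repair window
    ceph osd out 3
    start=$(date +%s)
    # poll until no pgs are degraded/remapped/recovering/backfilling any more
    while ceph pg stat | grep -qE 'degraded|remapped|recover|backfill'; do sleep 5; done
    echo "repair took $(( $(date +%s) - start )) seconds"

The same loop works for comparing pool types: run it once against a replicated pool and once against an erasure-coded one, then compare the reported times.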
[17:47] * thomnico (~thomnico@cro38-2-88-180-16-18.fbx.proxad.net) Quit (Quit: Ex-Chat)
[17:47] * sarob (~sarob@2601:9:7080:13a:2812:1d6f:2f5a:f10e) has joined #ceph
[17:48] * xmltok (~xmltok@216.103.134.250) has joined #ceph
[17:49] * zerick (~eocrospom@190.187.21.53) has joined #ceph
[17:49] * xmltok (~xmltok@216.103.134.250) Quit ()
[17:49] <jyluke> hi, i am having issue with mds respawning contantly and the mds log keep showing something like: 2014-03-16 18:49:17.358681 7f4c2b5e1700 -1 mds/journal.cc: In function 'void EMetaBlob::replay(MDS*, LogSegment*, MDSlaveUpdate*)' thread 7f4c2b5e1700 time 2014-03-16 18:49:17.356336 mds/journal.cc: 1316: FAILED assert(i == used_preallocated_ino)
[17:52] <jyluke> quit
[17:53] * xmltok (~xmltok@216.103.134.250) has joined #ceph
[17:54] * jyluke (~oftc-webi@118.100.4.146) Quit (Remote host closed the connection)
[17:55] * sarob (~sarob@2601:9:7080:13a:2812:1d6f:2f5a:f10e) Quit (Ping timeout: 480 seconds)
[17:58] * bandrus (~Adium@adsl-75-5-250-197.dsl.scrm01.sbcglobal.net) has joined #ceph
[18:02] * jedanbik (~jedanbik@dhcp152-54-6-141.wireless.europa.renci.org) has joined #ceph
[18:02] <lluis> Is there any way to remove the default pools: "rbd data metadata" ?
[18:05] <fghaas> lluis: yes, rados rmpool, but I don't think you'll want to do that :)
[18:06] <lluis> fghaas: thanks, I was deleting through ceph and it didn't let me
[18:06] <lluis> fghaas: Yes, I want to do that, only for debugging/evaluation purposes
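For reference, both spellings of pool removal require the pool name twice plus a safety flag, precisely because the operation is destructive; shown here for the default "data" pool under discussion:

    rados rmpool data data --yes-i-really-really-mean-it
    # equivalently, via the ceph tool:
    ceph osd pool delete data data --yes-i-really-really-mean-it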
[18:06] <jtang1> is anyone here using LVS for load balancing radosgw by any chance?
[18:06] <jtang1> im kinda curious if everyone is just using haproxy because it's easy to set up?
[18:06] <dmsimard> fghaas: Just wanted to say I stumbled on your linux conf Australia 2013 slides and that I liked them :)
[18:07] <fghaas> dmsimard: oh thank you, yes that's one of them slide decks that apparently still has people talking :)
[18:08] <dmsimard> fghaas: I'm actually doing a talk tonight, it also uses impress.js :)
[18:08] <fghaas> I've since moved away from that
[18:08] <fghaas> I use reveal.js now... still looks awesome, but it takes less pain to get there :)
[18:09] <dmsimard> fghaas: does look nice, maybe I'll try it some other time
[18:09] * diegows (~diegows@190.190.5.238) Quit (Ping timeout: 480 seconds)
[18:09] * TMM (~hp@sams-office-nat.tomtomgroup.com) Quit (Quit: Ex-Chat)
[18:10] <fghaas> I hate the default theme, I think it looks clunky and depressing, but there's some neat stuff you can build with it
[18:11] <fghaas> http://www.hastexo.com/openstackisraeldec2013 for example … not Ceph related, but you see the point
[18:12] * lofejndif (~lsqavnbok@1RHAACTVN.tor-irc.dnsbl.oftc.net) Quit (Quit: gone)
[18:12] * i_m (~ivan.miro@gbibp9ph1--blueice2n1.emea.ibm.com) Quit (Quit: Leaving.)
[18:14] * jyluke (~oftc-webi@118.100.4.146) has joined #ceph
[18:14] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Read error: Connection reset by peer)
[18:16] * topro (~prousa@host-62-245-142-50.customer.m-online.net) has joined #ceph
[18:16] * sjustwork (~sam@2607:f298:a:607:fcf3:cadd:72c3:7932) has joined #ceph
[18:18] * sroy (~sroy@207.96.182.162) has joined #ceph
[18:20] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) has joined #ceph
[18:20] * i_m (~ivan.miro@gbibp9ph1--blueice4n1.emea.ibm.com) Quit ()
[18:24] <dmsimard> fghaas: It's "snappier" than impress, I like that
[18:28] * piezo (~piezo@108-88-37-13.lightspeed.iplsin.sbcglobal.net) has joined #ceph
[18:30] * sjm (~sjm@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[18:30] <jyluke> quit
[18:30] <jyluke> quit
[18:30] <jyluke> exit
[18:30] <jyluke> ?
[18:30] * jyluke (~oftc-webi@118.100.4.146) has left #ceph
[18:32] * jeremyh28 (~fire@27.106.31.42) Quit ()
[18:33] * jtang1 (~jtang@80.111.79.253) Quit (Quit: Leaving.)
[18:34] * angdraug (~angdraug@12.164.168.117) has joined #ceph
[18:35] * hybrid512 (~walid@195.200.167.70) Quit (Quit: Leaving.)
[18:37] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[18:38] * hybrid512 (~walid@195.200.167.70) Quit ()
[18:38] * hybrid512 (~walid@195.200.167.70) has joined #ceph
[18:42] * sputnik13 (~sputnik13@207.8.121.241) has joined #ceph
[18:49] * bandrus (~Adium@adsl-75-5-250-197.dsl.scrm01.sbcglobal.net) Quit (Quit: Leaving.)
[18:49] * dalgaaf (uid15138@id-15138.ealing.irccloud.com) Quit (Quit: Connection closed for inactivity)
[18:50] * princeholla (~princehol@p5DE95112.dip0.t-ipconnect.de) has joined #ceph
[18:52] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) has joined #ceph
[18:54] * bandrus (~Adium@adsl-75-5-250-197.dsl.scrm01.sbcglobal.net) has joined #ceph
[18:56] * nwat (~textual@eduroam-230-52.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[19:00] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[19:01] * madkiss (~madkiss@bzq-218-11-179.cablep.bezeqint.net) Quit (Read error: Connection reset by peer)
[19:01] * jtang1 (~jtang@80.111.79.253) has joined #ceph
[19:03] * jtang2 (~jtang@80.111.79.253) has joined #ceph
[19:03] * jtang1 (~jtang@80.111.79.253) Quit (Read error: Connection reset by peer)
[19:04] * madkiss (~madkiss@bzq-218-11-179.cablep.bezeqint.net) has joined #ceph
[19:04] <dwm> Is it a reasonable presumption that Ceph instances will be running on an OS with udev, or a functional equivalent?
[19:06] * markbby (~Adium@168.94.245.2) has joined #ceph
[19:06] <dwm> (I am considering a minor refactoring of the ceph OSD discovery code, so that management processes can query the udev DB rather than try to scrape ceph-disk output.)
[19:08] <dmick> Someone else saw it recently too:
[19:08] <dmick> http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-March/000416.html
[19:08] <dmick> ooops, sorry, that was for jyluke, was scrolled and didn't notice
[19:09] <dwm> dmick: I was able to figure out from context. :-)
[19:10] <dmick> dwm: there is always talk of porting to more-than-Linux, but many installations use udev for startup now; hard to say I guess
[19:12] * jtang2 (~jtang@80.111.79.253) Quit (Ping timeout: 480 seconds)
[19:12] * topro (~prousa@host-62-245-142-50.customer.m-online.net) Quit (Quit: Konversation terminated!)
[19:14] * theonceandfuturewarrenusui (~Warren@2607:f298:a:607:b8db:c806:85f2:3ae7) has joined #ceph
[19:14] * theonceandfuturewarrenusui (~Warren@2607:f298:a:607:b8db:c806:85f2:3ae7) Quit ()
[19:16] * theonceandfuturewarrenusui (~Warren@2607:f298:a:607:b8db:c806:85f2:3ae7) has joined #ceph
[19:16] <dwm> dmick: *nods* That was my thinking, too.
[19:20] * elyograg (~oftc-webi@client175.mainstreamdata.com) has joined #ceph
[19:21] * wusui (~Warren@2607:f298:a:607:c568:4757:5bf1:b42f) Quit (Ping timeout: 480 seconds)
[19:22] <elyograg> over the weekend, sage was kind enough to respond to some questions I had. The docs say that cephfs isn't ready for production, sage said he thought that would change later this year. can anyone share additional thoughts about what needs to change for it to be production ready, and whether I should entrust my data to it over the next couple of months?
[19:23] * jedanbik (~jedanbik@dhcp152-54-6-141.wireless.europa.renci.org) has left #ceph
[19:23] <dwm> elyograg: The definition of a trusted system is one that, if it fails, will violate policy.
[19:24] <dwm> elyograg: Current standing advice is not to use CephFS in a place where policy violation would be bad.
[19:24] * rotbeard (~redbeard@b2b-94-79-138-170.unitymedia.biz) Quit (Quit: Leaving)
[19:25] <dwm> elyograg: You might independently run some experiments for the use-cases you care about, and based on those experiments, determine that it's a reasonable thing to do in practice.
[19:25] <dwm> elyograg: But at the moment, if you're looking for someone to say, "Oh, it's really fine, go ahead", you're setting yourself up for some upset!
[19:26] <elyograg> that was what I did with the previous choice we made. Had no problems despite testing a number of possible scenarios. ran into problems that never happened in testing.
[19:26] * TMM (~hp@c97185.upc-c.chello.nl) has joined #ceph
[19:27] <dwm> "Testing can show the presence of bugs, but not their absence." -- Dijkstra.
[19:27] <dwm> (I might be slightly misremembering the quote.)
[19:28] <fghaas> elyograg: don't put a million files in a directory. do use MDSs in active/standby mode. don't use directory tree partitioning. don't re-export your CephFS via the in-kernel NFS server.
[19:29] <elyograg> fghaas: i do need NFS. how would I do that?
[19:29] <fghaas> look at ganesha
[19:29] <dwm> Mm, that reminds me, I tried re-exporting CephFS using the kernel NFS facility, and reliably got -ESTALE unless I exported the root.
[19:29] <dwm> (That is, attempting to re-export a subtree reliably failed.)
[19:33] * nuved (~novid@81.31.238.20) Quit (Ping timeout: 480 seconds)
[19:34] <elyograg> with ganesha, would I have it use the ceph FUSE, or just export a mounted native ceph?
[19:35] <elyograg> just read a little about it. first exposure.
[19:35] <dwm> elyograg: Neither; it links directly against libcephfs and can speak the RADOS protocol directly without kernel assistance.
[19:35] <elyograg> ah, that's really nice.
[19:36] <dwm> (I believe, I haven't played with it myself just yet beyond discovering there isn't a nice .deb for it yet.)
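A minimal nfs-ganesha export block for the Ceph FSAL might look roughly like the sketch below; this is untested, and directive names can vary between ganesha versions:

    EXPORT
    {
        Export_Id = 1;
        Path = "/";            # CephFS path to export
        Pseudo = "/cephfs";    # NFSv4 pseudo-filesystem mount point
        Access_Type = RW;
        FSAL {
            Name = CEPH;       # serve via libcephfs rather than the kernel client
        }
    }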
[19:37] * princeholla (~princehol@p5DE95112.dip0.t-ipconnect.de) Quit (Remote host closed the connection)
[19:40] * sprachgenerator (~sprachgen@130.202.135.223) has joined #ceph
[19:40] * i_m (~ivan.miro@nat-5-carp.hcn-strela.ru) has joined #ceph
[19:40] * geekmush (~Adium@cpe-66-68-198-33.rgv.res.rr.com) Quit (Quit: Leaving.)
[19:41] * bitblt (~don@rtp-isp-nat1.cisco.com) has joined #ceph
[19:41] * geekmush (~Adium@cpe-66-68-198-33.rgv.res.rr.com) has joined #ceph
[19:42] * sjm (~Adium@pool-108-53-56-179.nwrknj.fios.verizon.net) Quit (Quit: Leaving.)
[19:42] * Boltsky (~textual@cpe-198-72-138-106.socal.res.rr.com) Quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz???)
[19:44] * xarses (~andreww@12.164.168.117) Quit (Remote host closed the connection)
[19:56] <elyograg> fghaas: Thanks for the info. I think I could likely live with all those. Until I look it up, I don't even know what directory tree partitioning means. We'd probably have less than 10000 files max in a directory, with 1000 or 2000 being more likely. Ganesha would probably fill our needs for NFS. Ultimately I'd hope to move to the native client across the board, but that's not possible in the short term.
[19:58] * jtang1 (~jtang@80.111.79.253) has joined #ceph
[19:59] * Boltsky (~textual@office.deviantart.net) has joined #ceph
[20:00] * diegows (~diegows@200.16.99.223) has joined #ceph
[20:03] * jtang1 (~jtang@80.111.79.253) Quit (Read error: Operation timed out)
[20:04] * fdmanana (~fdmanana@bl10-140-160.dsl.telepac.pt) Quit (Quit: Leaving)
[20:11] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[20:13] * sjm (~Adium@mad5736d0.tmodns.net) has joined #ceph
[20:15] * sarob (~sarob@nat-dip4.cfw-a-gci.corp.yahoo.com) has joined #ceph
[20:17] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) has joined #ceph
[20:19] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[20:22] <elyograg> moving beyond cephfs ... if we wanted to use the object store, perhaps with an S3 front end, does that scale to huge numbers of files? We have hundreds of millions, though as I said only about 10000 max per directory. If business goes really well, it would be a few billion.
[20:23] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) Quit (Ping timeout: 480 seconds)
[20:26] <elyograg> For an object store, we probably would only really need to store the main asset file, of which we have about 90 million. Snapshots or a version history would be incredibly valuable.
[20:27] <elyograg> for video content, there are actually a few assets per document, but by far most of our assets are not video.
[20:28] <elyograg> they are mostly JPEGs, but there is a sizable number of text news articles.
[20:30] <elyograg> we don't have software ready for an object store, though ... so we need to have a scalable filesystem as an interim solution. The SAN architecture we started with won't scale for us, partly due to Oracle buying Sun and putting Solaris X86 outside our budget.
[20:30] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[20:32] <elyograg> I'd prefer to have a scalable filesystem as the final solution, not an object store. It's simply too useful to be able to poke around the data without special tools.
[20:32] <fghaas> thinking out loud here, I wonder if anyone in your shoes has ever considered running s3fs on top of radosgw, i.e. migrate the back end to an object store first, and then the application
[20:32] * sroy (~sroy@207.96.182.162) Quit (Quit: Quitte)
[20:32] <fghaas> don't take that as a suggestion though, just food for thought
[20:33] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit ()
[20:34] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[20:35] * sjm (~Adium@mad5736d0.tmodns.net) Quit (Read error: Connection reset by peer)
[20:35] * markbby (~Adium@168.94.245.2) has joined #ceph
[20:35] <elyograg> if an object store will scale to allow us to have a few hundred million assets taking up a few hundred terabytes with replication and versioning, we'd take it. :)
[20:35] <elyograg> we already have over 200TB of data with our 90 million assets.
[20:36] * sjm (~Adium@mad5736d0.tmodns.net) has joined #ceph
[20:37] <fghaas> chances are an object store will scale to that size much better than any file system... there's a reason that s3 is not a filesystem, and neither is swift
[20:37] <fghaas> rackspace cloud files is up to 100 petabytes at this point and no one is missing a file system :)
[20:40] <sage> loicd: leseb: what's the status/plan for setting up the ceph-brag server?
[20:40] <dwm> FYI, I am currently implementing a ceph-osd control module for Ansible. I hope to have something largely feature-complete and ready for sharing later this week.
[20:40] <sage> should we just stick it on the ceph.com machine?
[20:41] <dwm> So that one can assert expressions like, "ceph-osd disk=/dev/sda state={present,absent,in,out,…}"
[20:41] <dwm> (The current playbooks wrap ceph-deploy, which currently lacks e.g. OSD destruction capabilities.)
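As a sketch of the interface dwm describes (the module is unreleased at this point, so the task below is purely hypothetical):

    # hypothetical Ansible task using the ceph-osd module described above
    - name: ensure an osd exists on /dev/sdb
      ceph-osd: disk=/dev/sdb state=present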
[20:45] * markbby (~Adium@168.94.245.2) Quit (Quit: Leaving.)
[20:46] * nwat (~textual@eduroam-230-52.ucsc.edu) has joined #ceph
[20:52] * jtang1 (~jtang@80.111.79.253) has joined #ceph
[20:54] <elyograg> looking over the documentation for the object storage at a high level, it seems that there is not yet versioning support in either object API, and it doesn't mention snapshots. We could build versioning into our own API layer, though.
[20:56] <elyograg> a nice side effect of using ceph for this is that we could ultimately transition the non-asset filesystem data from SAN to CephFS and we would not need to build another storage back end.
[20:58] <elyograg> we would need to find an interim scalable filesystem solution, though. Gluster is giving us fits. Info on moosefs is less easily found, but the info I *have* found suggests that it is quite stable.
[21:00] * jtang1 (~jtang@80.111.79.253) Quit (Ping timeout: 480 seconds)
[21:03] <loicd> sage: I have setup a machine at brag.ceph.com, confirmed that leseb has root access and he presumably installed ceph-brag there (that was a few weeks ago)
[21:08] * diegows (~diegows@200.16.99.223) Quit (Ping timeout: 480 seconds)
[21:09] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) has joined #ceph
[21:12] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[21:14] <dwm> elyograg: I don't know your use case, but RBD is stable, and shared-block-device filesystems like OCFS2 and GFS2 exist. Might at least be worth looking at if you haven't evaluated already.
[21:15] * TESTR (~TESTR@dslb-092-072-189-096.pools.arcor-ip.net) has joined #ceph
[21:15] <TESTR> hello
[21:15] <TESTR> mon.CEPHNODE03 (rank 2) addr 192.168.112.223:6800/0 is down (out of quorum)
[21:15] <TESTR> how do I get it back up again? :>
[21:16] <TESTR> sudo start/stop ceph-mon-all / service ceph -a restart - neither helps
[21:16] <TESTR> restarting the node doesn't help either
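For a monitor out of quorum like TESTR's, the usual first checks are along these lines (the mon id, admin-socket path, and log path are the defaults and may differ on a given install):

    ceph mon stat                                # who is in quorum
    ceph --admin-daemon /var/run/ceph/ceph-mon.CEPHNODE03.asok mon_status
    tail -n 100 /var/log/ceph/ceph-mon.CEPHNODE03.log   # look for bind/clock-skew errors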
[21:20] <darkfader> elyograg: moosefs is _really_ stable. but it won't handle your number of objects
[21:21] <darkfader> if you look at the faq you see some evasive answers on large numbers of fs objects, along the lines of "if you need that, you're doing it wrong"
[21:22] <elyograg> I had not gotten far enough in my investigation. That's disappointing.
[21:23] <darkfader> please look at the faq a little and see if you come to the same conclusion
[21:25] * jksM (~jks@3e6b5724.rev.stofanet.dk) Quit (Read error: Connection reset by peer)
[21:25] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[21:25] * KevinPerks (~Adium@cpe-066-026-252-218.triad.res.rr.com) has joined #ceph
[21:25] * sjm (~Adium@mad5736d0.tmodns.net) Quit (Quit: Leaving.)
[21:26] * jcsp1 (~Adium@82-71-55-202.dsl.in-addr.zen.co.uk) Quit (Remote host closed the connection)
[21:27] <TESTR> anybody?
[21:31] * nwat (~textual@eduroam-230-52.ucsc.edu) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[21:33] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has left #ceph
[21:34] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[21:34] * Haksoldier (~islamatta@88.234.39.182) has joined #ceph
[21:35] <Haksoldier> EUZUBILLAHIMINEŞŞEYTANIRRACIM BISMILLAHIRRAHMANIRRAHIM
[21:35] <Haksoldier> ALLAHU EKBERRRRR! LA İLAHE İLLALLAH MUHAMMEDEN RESULULLAH!
[21:35] <Haksoldier> I did the obligatory prayers five times a day to the nation. And I promised myself that, who (beside me) taking care not to make the five daily prayers comes ahead of time, I'll put it to heaven. Who says prayer does not show attention to me I do not have a word for it.! Prophet Muhammad (s.a.v.)
[21:35] <Haksoldier> hell if you did until the needle tip could not remove your head from prostration Prophet Muhammad pbuh
[21:35] * Haksoldier (~islamatta@88.234.39.182) has left #ceph
[21:36] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit ()
[21:37] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) has joined #ceph
[21:39] * jks (~jks@3e6b5724.rev.stofanet.dk) has joined #ceph
[21:40] <elyograg> darkfader: I'm looking at the moosefs FAQ .. which entry are you looking at that says it won't handle our numbers?
[21:40] <TESTR> how come the port has changed?
[21:40] <TESTR> 1: 192.168.112.222:6789/0 mon.CEPHNODE02
[21:40] <TESTR> 2: 192.168.112.223:6800/0 mon.CEPHNODE03
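A monitor address on port 6800 rather than the usual 6789 suggests a bad entry in the monmap. One hedged way to repair it is to edit and re-inject the map with the affected mon stopped; the file paths here are illustrative:

    ceph mon getmap -o /tmp/monmap                 # fetch from a quorate mon
    monmaptool --print /tmp/monmap                 # inspect the recorded addresses
    monmaptool --rm CEPHNODE03 /tmp/monmap         # drop the bad entry
    monmaptool --add CEPHNODE03 192.168.112.223:6789 /tmp/monmap
    ceph-mon -i CEPHNODE03 --inject-monmap /tmp/monmap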
[21:41] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) has joined #ceph
[21:43] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) Quit (Read error: Connection reset by peer)
[21:43] <darkfader> i.e. You can run a mail server on MooseFS. You won't lose any files under a large system load. When the file system is busy, it will block until its operations complete, which will just cause the mail server to slow down.
[21:43] * MACscr (~Adium@c-98-214-103-147.hsd1.il.comcast.net) has joined #ceph
[21:43] <darkfader> there's more but i will need to find it again...
[21:44] <fghaas> well gee, that's just as good as gfs2. :)
[21:45] <darkfader> hehe
[21:45] <darkfader> i feel bad but i can't point to the other thing i saw
[21:46] <darkfader> would need sleep before i google-hunt that quote
[21:46] * jtang1 (~jtang@80.111.79.253) has joined #ceph
[21:46] * fghaas (~florian@91-119-229-245.dynamic.xdsl-line.inode.at) Quit (Quit: Leaving.)
[21:47] * linuxkidd (~linuxkidd@dhcp-64-102-35-170.cisco.com) Quit (Quit: Leaving)
[21:47] <elyograg> they talk about an environment with 500TiB, 25 million files, 2 million folders. I'd need more files and space than that, but I really hope I wouldn't need that many folders.
[21:48] * jtang1 (~jtang@80.111.79.253) Quit (Read error: Connection reset by peer)
[21:48] * jtang1 (~jtang@80.111.79.253) has joined #ceph
[21:48] <elyograg> actually I wouldn't need that much space in the interim, but I'd have more files than that.
[21:51] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) has joined #ceph
[21:53] * Sysadmin88 (~IceChat77@176.254.32.31) has joined #ceph
[21:54] * Cataglottism (~Cataglott@dsl-087-195-030-170.solcon.nl) Quit (Quit: My Mac Pro has gone to sleep. ZZZzzz…)
[21:55] <darkfader> this is the best i could find now, not what i originally had looked at:
[21:55] <darkfader> http://sourceforge.net/p/moosefs/mailman/moosefs-users/?viewmonth=201110
[21:56] <darkfader> generally, search for "moosefs many small files" and it gets a bit interesting
[21:56] * jtang1 (~jtang@80.111.79.253) Quit (Ping timeout: 480 seconds)
[21:56] <darkfader> there are setups with >100 million files, but check whether they're just archive storage
[21:56] <darkfader> (besides, 100 million files is like any larger netapp excel-file dump)
[21:57] <darkfader> nite
[21:57] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[21:57] <elyograg> with mostly jpegs, i'm not sure they really qualify as small.
[22:03] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) Quit (Quit: Leaving.)
[22:04] * sage (~quassel@2607:f298:a:607:b498:be93:d4e9:b16) Quit (Remote host closed the connection)
[22:05] * sage (~quassel@2607:f298:a:607:20ea:13bf:5af:2725) has joined #ceph
[22:05] * ChanServ sets mode +o sage
[22:05] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[22:11] * andreask (~andreask@h081217016175.dyn.cm.kabsi.at) has joined #ceph
[22:11] * ChanServ sets mode +v andreask
[22:15] * rahat (~rmahbub@128.224.252.2) has joined #ceph
[22:15] * fatih (~fatih@78.186.36.182) has joined #ceph
[22:17] * mikedawson (~chatzilla@23-25-46-97-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[22:20] * dmsimard (~Adium@108.163.152.2) Quit (Quit: Leaving.)
[22:20] * dmsimard (~Adium@108.163.152.2) has joined #ceph
[22:22] <ponyofdeath> hi, i have a kvm server up with its image on rbd/ceph. i have two monitor hostnames configured in the virsh config, but when i use iptables to block one of them it does not use the secondary. any ideas on how i can troubleshoot?
[22:22] * rotbeard (~redbeard@2a02:908:df10:6f00:76f0:6dff:fe3b:994d) Quit (Quit: Verlassend)
[22:22] * dmsimard (~Adium@108.163.152.2) Quit ()
[22:25] <ponyofdeath> here is my disk xml http://paste.ubuntu.com/7110583
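For comparison with ponyofdeath's paste, a libvirt RBD disk that can fail over between monitors normally lists every mon as a <host> element; the hostnames, pool/image name, and secret UUID below are illustrative:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw'/>
      <source protocol='rbd' name='rbd/vm-disk'>
        <host name='mon1.example.com' port='6789'/>
        <host name='mon2.example.com' port='6789'/>
      </source>
      <!-- the auth element is only needed when cephx is enabled -->
      <auth username='libvirt'>
        <secret type='ceph' uuid='00000000-0000-0000-0000-000000000000'/>
      </auth>
      <target dev='vda' bus='virtio'/>
    </disk>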
[22:27] * sjm (~Adium@172.56.37.94) has joined #ceph
[22:27] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[22:35] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) has joined #ceph
[22:38] * allsystemsarego (~allsystem@188.26.167.156) Quit (Quit: Leaving)
[22:39] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[22:41] * diegows (~diegows@190.210.59.61) has joined #ceph
[22:42] * jtang1 (~jtang@80.111.79.253) has joined #ceph
[22:45] * mattt (~textual@cpc9-rdng20-2-0-cust565.15-3.cable.virginm.net) Quit (Quit: Computer has gone to sleep.)
[22:49] * diegows (~diegows@190.210.59.61) Quit (Ping timeout: 480 seconds)
[22:49] * nwat (~textual@c-50-131-197-174.hsd1.ca.comcast.net) Quit (Quit: My MacBook has gone to sleep. ZZZzzz…)
[22:50] * jtang1 (~jtang@80.111.79.253) Quit (Ping timeout: 480 seconds)
[22:53] * The_Bishop_ (~bishop@e179116243.adsl.alicedsl.de) has joined #ceph
[22:53] * sjm (~Adium@172.56.37.94) has left #ceph
[22:55] * thb (~me@0001bd58.user.oftc.net) Quit (Quit: Leaving.)
[22:57] * wrale (~wrale@wrk-28-217.cs.wright.edu) Quit (Quit: Leaving)
[23:00] * The_Bishop (~bishop@f050144099.adsl.alicedsl.de) Quit (Ping timeout: 480 seconds)
[23:01] * BManojlovic (~steki@fo-d-130.180.254.37.targo.rs) has joined #ceph
[23:02] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) has joined #ceph
[23:03] * MarkN (~nathan@142.208.70.115.static.exetel.com.au) Quit ()
[23:06] * diegows (~diegows@190.210.59.57) has joined #ceph
[23:10] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) has joined #ceph
[23:15] * mikedawson (~chatzilla@c-98-220-189-67.hsd1.in.comcast.net) has joined #ceph
[23:15] * sprachgenerator (~sprachgen@130.202.135.223) Quit (Quit: sprachgenerator)
[23:18] * thuc (~thuc@c-71-198-202-49.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[23:24] * AfC (~andrew@2407:7800:400:1011:2ad2:44ff:fe08:a4c) has joined #ceph
[23:25] * xarses_ (~andreww@12.164.168.117) Quit (Read error: Operation timed out)
[23:36] * jtang1 (~jtang@80.111.79.253) has joined #ceph
[23:39] * diegows (~diegows@190.210.59.57) Quit (Read error: Operation timed out)
[23:44] * PerlStalker (~PerlStalk@2620:d3:8000:192::70) Quit (Quit: ...)
[23:44] * ircolle (~Adium@2601:1:8380:2d9:409e:ba25:3325:cf38) Quit (Quit: Leaving.)
[23:44] * jtang1 (~jtang@80.111.79.253) Quit (Ping timeout: 480 seconds)
[23:52] * wschulze (~wschulze@cpe-72-229-37-201.nyc.res.rr.com) has joined #ceph
[23:53] <lluis> I'm interested in storing 4 replicas, placing 2 replicas on one host (two different OSDs) and another 2 replicas on a second host (two different OSDs). I haven't managed to get a CRUSH rule working, though, which got me wondering whether that's even possible with CRUSH. Any ideas?
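What lluis asks for is expressible in CRUSH by first choosing two hosts and then two OSDs within each. A sketch of such a rule, assuming a root bucket named "default" and the standard host/osd types:

    # Sketch: 4 replicas as 2 hosts x 2 OSDs (ruleset number illustrative).
    rule replicated_2x2 {
        ruleset 1
        type replicated
        min_size 4
        max_size 4
        step take default
        step choose firstn 2 type host     # two distinct hosts
        step choose firstn 2 type osd      # two OSDs within each host
        step emit
    }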

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.