#ceph IRC Log

IRC Log for 2016-10-08

Timestamps are in GMT/BST.

[0:04] * jermudgeon (~jermudgeo@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[0:08] * ircolle (~Adium@2601:285:201:633a:1496:70a8:f562:73f9) Quit (Quit: Leaving.)
[0:13] * jermudgeon (~jermudgeo@gw1.ttp.biz.whitestone.link) has joined #ceph
[0:15] <blizzow> Will I get speed gains if I set "rbd cache writethrough until flush = false" in /etc/ceph.conf?
[0:17] * verbalins (~Zeis@108.61.166.139) Quit ()
[0:18] * erwan_taf (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Ping timeout: 480 seconds)
[0:22] * scubacuda (sid109325@0001fbab.user.oftc.net) has joined #ceph
[0:25] * jermudgeon (~jermudgeo@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[0:26] * jermudgeon (~jermudgeo@gw1.ttp.biz.whitestone.link) has joined #ceph
[0:28] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) Quit (Quit: doppelgrau)
[0:31] * fsimonce (~simon@95.239.69.67) Quit (Remote host closed the connection)
[0:35] * braderhart (sid124863@braderhart.user.oftc.net) has joined #ceph
[0:39] <ben1> blizzow: depends if the guest does a sync
[0:40] <blizzow> ben1: so it couldn't slow anything down?
[0:41] <ben1> blizzow: it shouldn't make any difference to well-conforming guests either way
[0:41] <ben1> but if you have a bad guest that doesn't sync then it's dangerous
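
For context, the option blizzow asks about is an RBD client setting; a minimal sketch of how it would sit in the [client] section of ceph.conf (conventionally /etc/ceph/ceph.conf), with the companion cache option shown at its usual default:

    [client]
    # RBD client-side cache (enabled by default in recent releases)
    rbd cache = true
    # Default is true: the cache behaves as writethrough until the guest issues
    # its first flush, then switches to writeback. Setting it to false enables
    # writeback immediately, which only matters for guests that never flush,
    # the case ben1 flags as dangerous.
    rbd cache writethrough until flush = false
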
[0:46] * andreww (~xarses@64.124.158.3) Quit (Ping timeout: 480 seconds)
[0:52] * Racpatel (~Racpatel@2601:87:3:31e3::34db) has joined #ceph
[0:55] * andreww (~xarses@c-73-202-191-48.hsd1.ca.comcast.net) has joined #ceph
[0:56] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:2038:c77a:4100:5349) has joined #ceph
[1:00] * braderhart (sid124863@braderhart.user.oftc.net) Quit (Ping timeout: 480 seconds)
[1:01] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Quit: Nettalk6 - www.ntalk.de)
[1:04] * scubacuda (sid109325@0001fbab.user.oftc.net) Quit (Read error: Connection reset by peer)
[1:10] * Unai (~Adium@192.77.237.216) Quit (Quit: Leaving.)
[1:11] * jarrpa (~jarrpa@63.225.131.166) Quit (Ping timeout: 480 seconds)
[1:20] * davidzlap (~Adium@2605:e000:1313:8003:b11f:ca14:b4d7:9e9a) Quit (Quit: Leaving.)
[1:21] * oms101 (~oms101@2003:57:ea42:5a00:c6d9:87ff:fe43:39a1) Quit (Ping timeout: 480 seconds)
[1:26] * blizzow (~jburns@50-243-148-102-static.hfc.comcastbusiness.net) Quit (Ping timeout: 480 seconds)
[1:26] * davidzlap (~Adium@2605:e000:1313:8003:b11f:ca14:b4d7:9e9a) has joined #ceph
[1:28] * scubacuda (sid109325@0001fbab.user.oftc.net) has joined #ceph
[1:29] * sickology (~root@vpn.bcs.hr) Quit (Read error: Connection reset by peer)
[1:29] * sickology (~root@vpn.bcs.hr) has joined #ceph
[1:30] * oms101 (~oms101@p20030057EA48FD00C6D987FFFE4339A1.dip0.t-ipconnect.de) has joined #ceph
[1:33] * sickology (~root@vpn.bcs.hr) Quit (Read error: Connection reset by peer)
[1:34] * sickology (~root@vpn.bcs.hr) has joined #ceph
[1:39] * EinstCrazy (~EinstCraz@116.238.122.20) has joined #ceph
[1:43] * salwasser (~Adium@2601:197:101:5cc1:29c0:6ecf:e6b9:4ab1) has joined #ceph
[1:45] * jermudgeon (~jermudgeo@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[1:46] * braderhart (sid124863@braderhart.user.oftc.net) has joined #ceph
[1:47] * EinstCrazy (~EinstCraz@116.238.122.20) Quit (Ping timeout: 480 seconds)
[1:48] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) Quit (Ping timeout: 480 seconds)
[1:49] * Skaag (~lunix@162.211.147.250) Quit (Quit: Leaving.)
[1:52] * jermudgeon (~jermudgeo@31.207.56.59) has joined #ceph
[1:53] * salwasser (~Adium@2601:197:101:5cc1:29c0:6ecf:e6b9:4ab1) Quit (Quit: Leaving.)
[1:54] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:2038:c77a:4100:5349) Quit (Ping timeout: 480 seconds)
[2:14] * jermudgeon_ (~jermudgeo@31.207.58.136) has joined #ceph
[2:16] * jermudgeon__ (~jermudgeo@31.207.58.136) has joined #ceph
[2:16] * jermudgeon (~jermudgeo@31.207.56.59) Quit (Ping timeout: 480 seconds)
[2:16] * jermudgeon__ is now known as jermudgeon
[2:19] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) has joined #ceph
[2:22] * jermudgeon_ (~jermudgeo@31.207.58.136) Quit (Ping timeout: 480 seconds)
[2:24] * davidzlap (~Adium@2605:e000:1313:8003:b11f:ca14:b4d7:9e9a) Quit (Ping timeout: 480 seconds)
[2:26] * bassam (sid154933@id-154933.brockwell.irccloud.com) has joined #ceph
[2:26] * joshd is now known as joshd|gone
[2:31] * zeestrat (sid176159@id-176159.brockwell.irccloud.com) has joined #ceph
[2:39] * jermudgeon_ (~jermudgeo@gw1.ttp.biz.whitestone.link) has joined #ceph
[2:41] * jermudgeon (~jermudgeo@31.207.58.136) Quit (Ping timeout: 480 seconds)
[2:41] * jermudgeon_ is now known as jermudgeon
[2:45] * Racpatel (~Racpatel@2601:87:3:31e3::34db) Quit (Ping timeout: 480 seconds)
[2:45] * KindOne_ (kindone@h4.129.30.71.dynamic.ip.windstream.net) has joined #ceph
[2:49] * om (~om@pool-108-16-60-84.phlapa.fios.verizon.net) has joined #ceph
[2:51] <om> Hi all. Does ceph require low latency for performance? I need a mountable fs that can auto-replicate between nodes that are in Europe and US for the same fs. Is that realistic? I know glusterfs is not good for this, and hadoop hdfs can handle it a bit better than glusterfs. Just want to see if ceph is a good option...
[2:51] * KindOne (kindone@0001a7db.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:51] * KindOne_ is now known as KindOne
[2:51] <om> Basically, is ceph blocking or non-blocking?
[2:52] <om> And is it appropriate to have a ceph fs span between nodes in usa and in europe for same fs
[2:54] * Racpatel (~Racpatel@2601:87:3:31e3:4e34:88ff:fe87:9abf) has joined #ceph
[2:58] * cyphase (~cyphase@000134f2.user.oftc.net) Quit (Ping timeout: 480 seconds)
[2:59] * cyphase (~cyphase@000134f2.user.oftc.net) has joined #ceph
[3:01] <SamYaple> om: nothing is good for that. no posix filesystem is going to go across the globe _well_. That said, there are people that do cephfs over WAN. its just really really slow
[3:01] <SamYaple> should be safe
[3:02] <SamYaple> its funny, this is one of those times that the speed of light actually matters
[3:03] <om> it doesn't have to be fast to replicate
[3:03] <om> just needs to be non-blocking when reading the fs
[3:03] <om> glusterfs blocks reads when you need a file from a node until it can verify with the full cluster that it's in the same state.... or something like that
[3:03] * Aramande_ (~Nanobot@exit0.liskov.tor-relays.net) has joined #ceph
[3:04] <SamYaple> i dont think thats actually whats happening in glusterfs, but i imagine youll have the same trouble with ceph
[3:04] <om> oh...
[3:04] <om> Object storage doesn't have that problem.
[3:05] <SamYaple> there is no mechanism to specify DNS based reading, so likely you will be reading across the globe
[3:05] <SamYaple> object storage isnt posix based. nor is it block based
[3:05] <SamYaple> its object based
[3:05] <om> right... I would use object storage, but the problem is being able to mount it
[3:06] <SamYaple> there are solutions out there to overlay and make object storage mountable and posix-like
[3:06] <om> swift doesn't seem to have a reliable fuse mount application. And s3 is not an option
[3:06] <SamYaple> but anyone that implements it well will have the same issue
[3:06] <om> I would use s3 but it's not on the table
[3:06] * jermudgeon (~jermudgeo@gw1.ttp.biz.whitestone.link) Quit (Quit: jermudgeon)
[3:06] <SamYaple> s3 doesnt have what you are asking for though....
[3:06] <SamYaple> what is your usecase here?
[3:06] <om> s3 mount?
[3:07] <om> can't use s3 anyway... :P
[3:07] <om> so, I need a mountable fs. Because the fs will be serving a directory for chroot
[3:08] <om> can't chroot into object storage unless it's fuse mounted...
[3:08] <om> haven't found a working dependable and stable fuse mount for swift (s3 is off the table).
[3:09] <om> So perhaps it's back to HDFS hadoop
[3:09] <om> because they do have a stable dependable fuse mount application POSIX compliant.
[3:10] <SamYaple> s3 mount is not posix, you know that right?
[3:11] <SamYaple> it sounds like you just want something represented on the local filesystem in a dir structure
[3:11] <om> point is, the nodes just need to replicate between high latency WAN. They don't mount across WAN, they mount from the local nodes in their own private net in the same dc, but those nodes need to have the fs auto-replicate across the atlantic ocean (replication performance is not a major concern, just read performance)
[3:11] <SamYaple> do you care about having multiple people use it at the same time?
[3:12] <om> oh yea, " HDFS is not a full-fledged POSIX compliant filesystem" you are right about that too
[3:12] * georgem (~Adium@69-165-135-139.dsl.teksavvy.com) has joined #ceph
[3:12] <om> multiple users do not need to access the same files at the same time
[3:12] * davidzlap1 (~Adium@cpe-172-91-154-245.socal.res.rr.com) Quit (Quit: Leaving.)
[3:12] <SamYaple> i dont think what you want exists. i also doubt it will ever exist. but i admittedly don't fully understand your requirements
[3:12] <SamYaple> i would highly recommend testing
[3:13] <om> the use case is for a sftp server with fs spanning the usa and europe dc.
[3:13] <SamYaple> andrewfs
[3:13] <SamYaple> its probably going to get you closest
[3:14] <om> Yep, heard that's one to try out. Thanks. Will ask in their irc. I already built it with glusterfs and have perf tweaks, but the reads are just too slow. Object storage mount is best I believe....
[3:14] <SamYaple> since you dont care about strong consistency (reads could be different based on where you are in the world) andrewfs might be best here
[3:15] <SamYaple> plus, its like 40 years old or something. thats gotta be a plus
[3:15] <om> not sure, but object storage mounts that connect to local nodes will work fine.
[3:15] <om> yea, that's my age!
[3:15] <om> lol
[3:15] * davidzlap (~Adium@2605:e000:1313:8003:4f2:3051:abbf:cdde) has joined #ceph
[3:15] <om> because object storage replicates in a queue without blocking io, I believe
[3:16] <SamYaple> you know that object storage will mean you may have minutes/hours between your nodes. so data isnt auto synchronized right?
[3:16] <om> a block level fs does block io
[3:16] <om> minutes/hours?
[3:16] <om> really?
[3:16] <SamYaple> yes. it can take hours to replicate around the world depending on your setup
[3:17] <om> I think auto replication shouldn't take longer than a minute for files smaller than 10 or even 100 MB...
[3:17] <om> but have no proof of it either...
[3:17] <SamYaple> i suggest you identify your exact requirements for a backend before proceeding
[3:18] <SamYaple> there are 1000's of options
[3:18] <SamYaple> im sure one is suitable
[3:18] <om> thanks. I have. I am in the search now.
[3:18] <SamYaple> cool :)
[3:18] <om> let me list them, if you are interested...
[3:19] * nilez (~nilez@104.129.29.42) Quit (Ping timeout: 480 seconds)
[3:20] <om> 1 - Autoreplication (performance is not crucial here)
[3:20] <om> 2 - Mountable on linux (preferably windows too)
[3:20] <om> 3 - Reads and writes should not be blocking io on the cluster. (read performance is critical, should not require cluster checks or quorum checks to verify reads and just supply the file from the mount that is connected to the local node)
[3:21] <om> To me, mountable object storage seems the way. But I am open and thankful for any other ideas.
[3:23] <om> so goes back to my question of, what mountable HA fs can do this?
[3:23] <om> does ceph block io while it checks quorum with all nodes?
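
For context, CephFS is the mountable filesystem being weighed here; a minimal sketch of the two usual ways to mount it (the monitor address, secret file, and mount point are hypothetical):

    # kernel client
    mount -t ceph 192.0.2.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
    # FUSE client
    ceph-fuse -m 192.0.2.10:6789 /mnt/cephfs

Either way the client reads from and writes to the primary OSD for each object, and RADOS replication is synchronous, so a remote primary or remote replicas add the full WAN round trip; that is the blocking behaviour om is asking about.
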
[3:25] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[3:33] * Aramande_ (~Nanobot@exit0.liskov.tor-relays.net) Quit ()
[3:36] * WedTM (~Epi@cloud.tor.ninja) has joined #ceph
[3:41] * davidzlap (~Adium@2605:e000:1313:8003:4f2:3051:abbf:cdde) Quit (Quit: Leaving.)
[3:43] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Quit: Leaving...)
[3:51] * mhack (~mhack@24-151-36-149.dhcp.nwtn.ct.charter.com) Quit (Remote host closed the connection)
[3:54] * EinstCrazy (~EinstCraz@58.247.119.250) has joined #ceph
[4:01] * jowilkin (~jowilkin@184-23-213-254.fiber.dynamic.sonic.net) Quit (Quit: Leaving)
[4:04] * jfaj (~jan@2003:84:ad33:2900:6af7:28ff:fe67:77ff) Quit (Ping timeout: 480 seconds)
[4:06] * WedTM (~Epi@cloud.tor.ninja) Quit ()
[4:13] * flisky (~Thunderbi@106.38.61.183) has joined #ceph
[4:13] * jfaj (~jan@p20030084AD6F2D006AF728FFFE6777FF.dip0.t-ipconnect.de) has joined #ceph
[4:23] * atheism (~atheism@182.48.117.114) Quit (Remote host closed the connection)
[4:23] * atheism (~atheism@182.48.117.114) has joined #ceph
[4:39] * Jeffrey4l (~Jeffrey@110.252.64.206) has joined #ceph
[4:43] * Jeffrey4l_ (~Jeffrey@110.252.64.206) has joined #ceph
[4:47] <SamYaple> om: my day is over, but if you dont get an answer, feel free to pm me sometime
[5:11] * aarcane (~aarcane@108-208-206-178.lightspeed.irvnca.sbcglobal.net) has joined #ceph
[5:12] <aarcane> Anyone got any leads on ceph related jobs? I love ceph, but due to the hardware requirements on a *functional* cluster, maintaining and developing it at home is... challenging.
[5:14] * nilez (~nilez@ec2-52-37-170-77.us-west-2.compute.amazonaws.com) has joined #ceph
[5:18] <rkeene> I run a functional Ceph cluster as VMs on my laptop (as part of my job, we build Ceph appliances)
[5:20] * kefu (~kefu@li401-71.members.linode.com) has joined #ceph
[5:21] <aarcane> rkeene, I can get a ceph cluster up.. but I can't watch it as it grows and evolves organically. I can't see how it handles disk failures after running for two or three years with mis-matched versions across the nodes because the cluster is mid-upgrade... I can't get experience with debugging why a single cluster node is performing poorly only when 1000 clients try to log in in the morning, but otherwise seems fine..
[5:21] <aarcane> There are things you can do in a handful of VMs, and things you can't.
[5:22] <rkeene> I can definitely run different versions of Ceph after simulating workloads and then yank disks
[5:23] * yanzheng1 (~zhyan@125.70.23.12) has joined #ceph
[5:23] <rkeene> One of my automatic tests, for every single change I make, is an upgrade test from one release of my product to the newest trunk
[5:26] <aarcane> rkeene, okay, tell me rkeene.. How do you simulate a generational workload. Something that's been running for years.. seen 2-3 versions of ceph, and been overwritten and freed a dozen times, and is just now beginning to experience silent data corruption since the last scrub? I totally get that you can test a lot of basic features in a VM on the regular.. That's not a problem. What's impossible to do (on my budget at
[5:26] <aarcane> least...) is get your hands elbow deep in bare metal cluster with the types of unpredictable circumstances you encounter in the wild. I love that stuff. I thrive on that stuff. I want a job *doing* that stuff.. but for now, the best I can do is set up a bunch of VMs that aren't very useful because they all share the same small pool of disks and are only on a virtual network..
[5:27] <aarcane> I don't get to see a failing infiniband cable on my home budget
[5:28] <aarcane> I don't get to try to diagnose which component is failing on a node
[5:28] <aarcane> I can basically set it up and tear it down
[5:28] <rkeene> I can easily test upgrading multiple times, through multiple versions of Ceph -- and I do simulate workloads. Ceph is only part of the product, the rest is QEMU and OpenNebula, so during the upgrade test many VMs are running.
[5:29] <aarcane> It sounds like you have a decent job
[5:29] <aarcane> Is your company hiring?
[5:29] <rkeene> (Ceph cluster is built for the Storage Cloud, then the Compute Cluster, which has no disks, then one node in the Compute Cluster self-selects to run the OpenNebula node, then that node will learn about all compute nodes and put workloads on them)
[5:31] <rkeene> It drives almost all this over the serial console for the VMs, since I run my own Linux distribution I test all of it
[5:31] <rkeene> We're going to be hiring in the new year
[5:31] <rkeene> It would be great to have someone take care of Ceph since I mostly work on everything else... My main complaint about Ceph is about how terribly slow it is
[5:32] <aarcane> Bah. I have outside pressure to find a job before the year is up.. But are you accepting resumes yet?
[5:32] <rkeene> Sure, but we don't have an open req for this team -- we do have many jobs for other teams
[5:41] * flisky1 (~Thunderbi@106.38.61.190) has joined #ceph
[5:42] * flisky (~Thunderbi@106.38.61.183) Quit (Read error: Connection reset by peer)
[5:42] * flisky1 is now known as flisky
[5:45] * Jeffrey4l_ (~Jeffrey@110.252.64.206) Quit (Quit: Leaving)
[5:46] * georgem (~Adium@69-165-135-139.dsl.teksavvy.com) Quit (Quit: Leaving.)
[5:55] * Vacuum__ (~Vacuum@i59F7942F.versanet.de) has joined #ceph
[6:02] * Vacuum_ (~Vacuum@i59F79C7C.versanet.de) Quit (Ping timeout: 480 seconds)
[6:06] * om (~om@pool-108-16-60-84.phlapa.fios.verizon.net) Quit (Quit: Leaving)
[6:06] * om (~om@pool-108-16-60-84.phlapa.fios.verizon.net) has joined #ceph
[6:11] * walcubi (~walcubi@p5795BD4C.dip0.t-ipconnect.de) Quit (Ping timeout: 480 seconds)
[6:11] * walcubi (~walcubi@p5797A886.dip0.t-ipconnect.de) has joined #ceph
[6:21] * BillyBobJohn (~Atomizer@tsn109-201-154-199.dyn.nltelcom.net) has joined #ceph
[6:21] * om (~om@pool-108-16-60-84.phlapa.fios.verizon.net) Quit (Ping timeout: 480 seconds)
[6:24] * Racpatel (~Racpatel@2601:87:3:31e3:4e34:88ff:fe87:9abf) Quit (Read error: Connection timed out)
[6:25] * blahdodo (~blahdodo@69.172.164.248) has joined #ceph
[6:44] * nilez (~nilez@ec2-52-37-170-77.us-west-2.compute.amazonaws.com) Quit (Ping timeout: 480 seconds)
[6:46] * lxo (~aoliva@lxo.user.oftc.net) has joined #ceph
[6:51] * BillyBobJohn (~Atomizer@tsn109-201-154-199.dyn.nltelcom.net) Quit ()
[6:53] * nilez (~nilez@ec2-52-37-170-77.us-west-2.compute.amazonaws.com) has joined #ceph
[6:56] * Shadow386 (~aldiyen@normalcitizen.spirosandreou.com) has joined #ceph
[7:09] * mattbenjamin (~mbenjamin@rrcs-70-60-132-138.central.biz.rr.com) has joined #ceph
[7:22] * kefu_ (~kefu@114.92.125.128) has joined #ceph
[7:26] * Shadow386 (~aldiyen@normalcitizen.spirosandreou.com) Quit ()
[7:28] * kefu (~kefu@li401-71.members.linode.com) Quit (Ping timeout: 480 seconds)
[7:40] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) has joined #ceph
[7:43] * mattbenjamin (~mbenjamin@rrcs-70-60-132-138.central.biz.rr.com) Quit (Ping timeout: 480 seconds)
[7:55] * kefu_ (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[7:56] * kefu (~kefu@114.92.125.128) has joined #ceph
[8:10] * flisky (~Thunderbi@106.38.61.190) Quit (Quit: flisky)
[8:15] * Miouge (~Miouge@208.143-65-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[8:31] * Miouge (~Miouge@208.143-65-87.adsl-dyn.isp.belgacom.be) Quit (Quit: Miouge)
[8:33] * doppelgrau (~doppelgra@dslb-088-072-094-200.088.072.pools.vodafone-ip.de) has joined #ceph
[8:44] * Skaag (~lunix@cpe-172-91-77-84.socal.res.rr.com) Quit (Quit: Leaving.)
[8:49] * om (~om@2600:1002:b00c:e20c:d4a:382e:e418:2882) has joined #ceph
[8:57] * llua (~arx@six.happyforever.com) Quit (Ping timeout: 480 seconds)
[8:59] * [arx] (~arx@the.kittypla.net) has joined #ceph
[9:05] * om2 (~om@52.sub-174-201-12.myvzw.com) has joined #ceph
[9:07] * mykola (~Mikolaj@91.245.75.214) has joined #ceph
[9:09] * om2 (~om@52.sub-174-201-12.myvzw.com) Quit ()
[9:10] * om2 (~om@52.sub-174-201-12.myvzw.com) has joined #ceph
[9:12] * om (~om@2600:1002:b00c:e20c:d4a:382e:e418:2882) Quit (Ping timeout: 480 seconds)
[9:12] * om2 (~om@52.sub-174-201-12.myvzw.com) Quit (Read error: Connection reset by peer)
[9:30] * kuku (~kuku@112.203.59.175) has joined #ceph
[9:30] * stiopa (~stiopa@cpc73832-dals21-2-0-cust453.20-2.cable.virginm.net) has joined #ceph
[9:35] * Nicho1as (~nicho1as@00022427.user.oftc.net) has joined #ceph
[9:43] * Miouge (~Miouge@208.143-65-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[9:54] * sleinen (~Adium@2001:620:0:69::101) has joined #ceph
[10:02] * sleinen (~Adium@2001:620:0:69::101) Quit (Ping timeout: 480 seconds)
[10:06] * Miouge (~Miouge@208.143-65-87.adsl-dyn.isp.belgacom.be) Quit (Quit: Miouge)
[10:06] * Miouge (~Miouge@208.143-65-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[10:42] * Miouge (~Miouge@208.143-65-87.adsl-dyn.isp.belgacom.be) Quit (Quit: Miouge)
[10:52] * kuku (~kuku@112.203.59.175) Quit (Remote host closed the connection)
[10:53] * nathani1 (~nathani@frog.winvive.com) Quit (Read error: Connection reset by peer)
[10:53] * nathani1 (~nathani@2607:f2f8:ac88::) has joined #ceph
[10:57] * kuku (~kuku@112.203.59.175) has joined #ceph
[11:03] * kuku (~kuku@112.203.59.175) Quit (Remote host closed the connection)
[11:35] * kuku (~kuku@112.203.59.175) has joined #ceph
[11:38] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:2038:c77a:4100:5349) has joined #ceph
[11:42] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[11:51] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:2038:c77a:4100:5349) Quit (Ping timeout: 480 seconds)
[11:56] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) has joined #ceph
[12:05] * [arx] (~arx@the.kittypla.net) Quit (Ping timeout: 480 seconds)
[12:09] * [arx] (~arx@six.happyforever.com) has joined #ceph
[12:09] * nardial (~ls@p5DC07246.dip0.t-ipconnect.de) has joined #ceph
[12:10] * kuku (~kuku@112.203.59.175) Quit (Remote host closed the connection)
[12:23] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[12:28] * mgolub (~Mikolaj@91.245.74.58) has joined #ceph
[12:32] * kuku (~kuku@112.203.59.175) has joined #ceph
[12:33] * mykola (~Mikolaj@91.245.75.214) Quit (Ping timeout: 480 seconds)
[12:37] * yanzheng1 (~zhyan@125.70.23.12) Quit (Quit: This computer has gone to sleep)
[12:38] * Redshift1 (~Mraedis@93.115.95.201) has joined #ceph
[12:39] * [0x4A6F]_ (~ident@p4FC269C7.dip0.t-ipconnect.de) has joined #ceph
[12:42] * [0x4A6F] (~ident@0x4a6f.user.oftc.net) Quit (Ping timeout: 480 seconds)
[12:42] * [0x4A6F]_ is now known as [0x4A6F]
[12:44] * The_Ball (~pi@20.92-221-43.customer.lyse.net) Quit (Quit: Leaving)
[12:45] * The_Ball (~pi@20.92-221-43.customer.lyse.net) has joined #ceph
[12:46] * EinstCrazy (~EinstCraz@58.247.119.250) Quit (Remote host closed the connection)
[12:46] * kuku (~kuku@112.203.59.175) Quit (Read error: Connection reset by peer)
[12:48] * kuku (~kuku@112.203.59.175) has joined #ceph
[12:52] * dgurtner (~dgurtner@84-73-130-19.dclient.hispeed.ch) Quit (Ping timeout: 480 seconds)
[13:01] * kefu (~kefu@114.92.125.128) has joined #ceph
[13:02] * Discovery (~Discovery@109.235.52.3) has joined #ceph
[13:07] * [arx] (~arx@six.happyforever.com) Quit (Ping timeout: 480 seconds)
[13:08] * Redshift1 (~Mraedis@93.115.95.201) Quit ()
[13:18] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[13:20] * lxo (~aoliva@lxo.user.oftc.net) Quit (Ping timeout: 480 seconds)
[13:21] * [arx] (~arx@six.happyforever.com) has joined #ceph
[13:29] * kuku (~kuku@112.203.59.175) Quit (Remote host closed the connection)
[13:36] * wjw-freebsd (~wjw@smtp.digiware.nl) has joined #ceph
[13:52] * wjw-freebsd (~wjw@smtp.digiware.nl) Quit (Ping timeout: 480 seconds)
[14:22] * nardial (~ls@p5DC07246.dip0.t-ipconnect.de) Quit (Quit: Leaving)
[14:45] * salwasser (~Adium@c-73-219-86-22.hsd1.ma.comcast.net) has joined #ceph
[14:45] * salwasser (~Adium@c-73-219-86-22.hsd1.ma.comcast.net) Quit ()
[14:47] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[14:52] * kefu (~kefu@114.92.125.128) has joined #ceph
[15:09] <fusl> does anyone know why ceph-disk fails with AssertionError when running "/usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/md3p2 /dev/md3p1"? https://scr.meo.ws/paste/1475932145071964452.txt
[15:11] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[15:14] * salwasser (~Adium@2601:197:101:5cc1:102c:ac06:26e4:5cfb) has joined #ceph
[15:16] * salwasser (~Adium@2601:197:101:5cc1:102c:ac06:26e4:5cfb) Quit ()
[15:20] * minnesotags (~herbgarci@c-50-137-242-97.hsd1.mn.comcast.net) Quit (Read error: Connection reset by peer)
[15:20] * [arx] (~arx@six.happyforever.com) Quit (Ping timeout: 480 seconds)
[15:23] * [arx] (~arx@six.happyforever.com) has joined #ceph
[15:24] <fusl> hmm, seems i accidentally partitioned the disks wrong, using f800 for both partitions fixed it
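
For context, a minimal sketch of the fix fusl describes, retagging both partitions with gdisk's "Ceph OSD" type code before re-running prepare (the sgdisk step is an assumption; the device names come from the paste above):

    # f800 is the Ceph OSD partition type code in gdisk/sgdisk
    sgdisk --typecode=1:f800 --typecode=2:f800 /dev/md3
    partprobe /dev/md3
    /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/md3p2 /dev/md3p1
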
[15:35] * minnesotags (~herbgarci@c-50-137-242-97.hsd1.mn.comcast.net) has joined #ceph
[15:40] * [arx] (~arx@six.happyforever.com) Quit (Ping timeout: 480 seconds)
[15:45] * [arx] (~arx@the.kittypla.net) has joined #ceph
[15:50] * sleinen (~Adium@2001:620:0:82::102) has joined #ceph
[15:57] * erwan_taf (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[15:59] * shaunm (~shaunm@m995a36d0.tmodns.net) has joined #ceph
[15:59] * The_Ball (~pi@20.92-221-43.customer.lyse.net) Quit (Read error: Connection reset by peer)
[16:06] * The_Ball (~pi@20.92-221-43.customer.lyse.net) has joined #ceph
[16:13] * shaunm (~shaunm@m995a36d0.tmodns.net) Quit (Ping timeout: 480 seconds)
[16:22] * sleinen1 (~Adium@2001:620:0:69::100) has joined #ceph
[16:24] * sleinen (~Adium@2001:620:0:82::102) Quit (Ping timeout: 480 seconds)
[16:27] * kefu (~kefu@114.92.125.128) has joined #ceph
[16:32] * erwan_taf (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Ping timeout: 480 seconds)
[16:32] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) has joined #ceph
[16:35] * sleinen1 (~Adium@2001:620:0:69::100) Quit (Ping timeout: 480 seconds)
[16:39] * Tumm (~ahmeni@tor-exit.squirrel.theremailer.net) has joined #ceph
[16:48] * kefu (~kefu@114.92.125.128) Quit (Max SendQ exceeded)
[16:49] * kefu (~kefu@211.22.145.245) has joined #ceph
[16:54] * Discovery (~Discovery@109.235.52.3) Quit (Ping timeout: 480 seconds)
[16:56] * Discovery (~Discovery@109.235.52.3) has joined #ceph
[16:58] * kefu (~kefu@211.22.145.245) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[17:09] * Tumm (~ahmeni@tor-exit.squirrel.theremailer.net) Quit ()
[17:23] * erwan_taf (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) has joined #ceph
[17:32] * spidu_ (~puvo@212.83.40.239) has joined #ceph
[17:47] * kefu (~kefu@114.92.125.128) has joined #ceph
[18:02] * spidu_ (~puvo@212.83.40.239) Quit ()
[18:06] * erwan_taf (~erwan@2a01:e34:eecb:7400:4eeb:42ff:fedc:8ac) Quit (Ping timeout: 480 seconds)
[18:09] * BrianA (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) has joined #ceph
[18:31] * Miouge (~Miouge@208.143-65-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[18:46] * newbie (~kvirc@host217-114-156-249.pppoe.mark-itt.net) has joined #ceph
[18:50] * Miouge (~Miouge@208.143-65-87.adsl-dyn.isp.belgacom.be) Quit (Quit: Miouge)
[18:51] * ivve (~zed@c83-254-25-170.bredband.comhem.se) has joined #ceph
[18:53] * Miouge (~Miouge@208.143-65-87.adsl-dyn.isp.belgacom.be) has joined #ceph
[19:14] * BrianA1 (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) has joined #ceph
[19:14] * phyphor (~jakekosbe@178-175-128-50.static.host) has joined #ceph
[19:17] * BrianA (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[19:17] * kefu (~kefu@114.92.125.128) Quit (Quit: My Mac has gone to sleep. ZZZzzz…)
[19:17] * BrianA1 (~BrianA@c-73-189-153-151.hsd1.ca.comcast.net) Quit ()
[19:28] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[19:34] * funnel (~funnel@81.4.123.134) Quit (Quit: leaving)
[19:36] * funnel (~funnel@0001c7d4.user.oftc.net) has joined #ceph
[19:40] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[19:40] * ivve (~zed@c83-254-25-170.bredband.comhem.se) Quit (Ping timeout: 480 seconds)
[19:44] * phyphor (~jakekosbe@178-175-128-50.static.host) Quit ()
[19:50] * Miouge (~Miouge@208.143-65-87.adsl-dyn.isp.belgacom.be) Quit (Ping timeout: 480 seconds)
[19:51] * Miouge (~Miouge@109.129.181.76) has joined #ceph
[20:01] * bniver (~bniver@71-9-144-29.static.oxfr.ma.charter.com) Quit (Remote host closed the connection)
[20:03] * John341 (~ceph@118.200.221.105) Quit (Ping timeout: 480 seconds)
[20:06] * John341 (~ceph@118.200.221.105) has joined #ceph
[20:12] * Nicho1as (~nicho1as@00022427.user.oftc.net) Quit (Quit: A man from the Far East; using WeeChat 1.5)
[20:16] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:2038:c77a:4100:5349) has joined #ceph
[20:21] * guerby (~guerby@ip165.tetaneutral.net) Quit (Quit: Leaving)
[20:23] * guerby (~guerby@ip165.tetaneutral.net) has joined #ceph
[20:31] * technil (~technil@host.cctv.org) has joined #ceph
[21:11] <thoht> anybody tried to use ceph-osd with sata zfs backend + zil/l2arc ssd?
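
For context, a minimal sketch of the layout thoht describes, a pool on SATA disks with an SSD split between a separate log device (ZIL/SLOG) and an L2ARC cache; device names and the OSD dataset are hypothetical:

    # SATA data disks, SSD partitions for SLOG and L2ARC
    zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb \
        log /dev/nvme0n1p1 cache /dev/nvme0n1p2
    # a dataset a filestore OSD could use as its data directory (xattrs needed)
    zfs create -o xattr=sa tank/osd0
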
[21:12] * Miouge (~Miouge@109.129.181.76) Quit (Quit: Miouge)
[21:13] * Vacuum_ (~Vacuum@88.130.192.120) has joined #ceph
[21:19] * Miouge (~Miouge@109.129.181.76) has joined #ceph
[21:19] * Vacuum__ (~Vacuum@i59F7942F.versanet.de) Quit (Ping timeout: 480 seconds)
[21:20] * Miouge (~Miouge@109.129.181.76) Quit ()
[21:20] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) Quit (Ping timeout: 480 seconds)
[21:24] * fdmanana (~fdmanana@2001:8a0:6e0c:6601:2038:c77a:4100:5349) Quit (Ping timeout: 480 seconds)
[21:30] * mgolub (~Mikolaj@91.245.74.58) Quit (Quit: away)
[21:31] * Miouge (~Miouge@109.129.181.76) has joined #ceph
[21:32] * debian112 (~bcolbert@c-73-184-103-26.hsd1.ga.comcast.net) has joined #ceph
[21:32] * Miouge (~Miouge@109.129.181.76) Quit ()
[21:34] * Miouge (~Miouge@109.129.181.76) has joined #ceph
[21:34] * Miouge (~Miouge@109.129.181.76) Quit (Remote host closed the connection)
[21:35] * technil (~technil@host.cctv.org) Quit (Ping timeout: 480 seconds)
[21:36] * xul (~Bored@tsn109-201-154-139.dyn.nltelcom.net) has joined #ceph
[21:38] * haplo37 (~haplo37@107.190.42.94) has joined #ceph
[21:45] * technil (~technil@host.cctv.org) has joined #ceph
[22:04] * technil (~technil@host.cctv.org) Quit (Quit: Ex-Chat)
[22:06] * xul (~Bored@tsn109-201-154-139.dyn.nltelcom.net) Quit ()
[22:15] * EdGruberman (~Tumm@anonymous6.sec.nl) has joined #ceph
[22:40] * Discovery (~Discovery@109.235.52.3) Quit (Read error: Connection reset by peer)
[22:45] * EdGruberman (~Tumm@anonymous6.sec.nl) Quit ()
[22:52] * sleinen (~Adium@2001:620:0:69::100) has joined #ceph
[23:46] * sleinen (~Adium@2001:620:0:69::100) Quit (Ping timeout: 480 seconds)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.