#ceph IRC Log


IRC Log for 2011-01-08

Timestamps are in GMT/BST.

[0:00] <cmccabe> ajnelson: then it depends on what logging you have set up in the configuration file.
[0:00] <ajnelson> cmccabe: erm, are you referring to unstable work, or to v0.24?
[0:00] <cmccabe> ajnelson: 0.24
[0:01] <cmccabe> ajnelson: in unstable, foreground programs always log to... er... the foreground
[0:01] <ajnelson> Ok. Is there a client section in the ceph.conf?
[0:01] * gnp421 (~hutchint@c-75-71-83-44.hsd1.co.comcast.net) has joined #ceph
[0:01] <gregaf> I don't know if you can actually set synclient options in the conf file
[0:01] <ajnelson> Is there a cfuse section for ceph.conf?
[0:01] <cmccabe> ajnelson, try applying this patch... 1sec
[0:04] <gregaf> ajnelson: oh, yep, there is!
[0:04] <gregaf> just create a section called cfuse like the others :)
[0:04] <ajnelson> gregaf: Cool! Is there a debug name for cfuse?
[0:04] <ajnelson> or just "debug cfuse"?
[0:04] <gregaf> all the cfuse debugging is in the Client
[0:04] <gregaf> "debug client"
[0:05] <cmccabe> diff --git a/src/csyn.cc b/src/csyn.cc
[0:05] <cmccabe> index 9c5303c..36ceca7 100644
[0:05] <cmccabe> --- a/src/csyn.cc
[0:05] <cmccabe> +++ b/src/csyn.cc
[0:05] <cmccabe> @@ -49,6 +49,16 @@ int main(int argc, const char **argv, char *envp[])
[0:05] <cmccabe> parse_syn_options(args); // for SyntheticClient
[0:05] <cmccabe>
[0:05] <cmccabe> vec_to_argv(args, argc, argv);
[0:05] <cmccabe> + free(g_conf.log_file);
[0:05] <cmccabe> + g_conf.log_file = NULL;
[0:05] <cmccabe> + free(g_conf.log_dir);
[0:05] <cmccabe> + g_conf.log_dir = NULL;
[0:05] <cmccabe> + free(g_conf.log_sym_dir);
[0:05] <cmccabe> + g_conf.log_sym_dir = NULL;
[0:05] <cmccabe> + g_conf.log_sym_history = 0;
[0:05] <cmccabe> + g_conf.log_to_stdout = true;
[0:05] <cmccabe> + g_conf.log_to_syslog = false;
[0:05] <cmccabe> + g_conf.log_per_instance = false;
[0:05] <cmccabe>
[0:05] <cmccabe> if (g_conf.clock_tare) g_clock.tare();
[0:05] <cmccabe> to get synclient to log to the foreground
[0:06] <cmccabe> I don't know if synclient still works any more though...
[0:06] <cmccabe> nowadays we are testing via cfuse, the kernel client, librados, and some other interfaces, not really with synclient.
[0:06] <ajnelson> cmccabe: It works insofar as what's on the wiki doesn't cause a crash. ;) I'm not sure what its "write" command does, though. I'm using synclient to test a modification I'm making to the client.
[0:08] <gnp421> Hi, I have a question, it is stupid but, has anyone thought of making a windows ceph client?
[0:08] <gregaf> thought of? yes
[0:09] <gregaf> seriously considered? no
[0:09] <Tv|work> in theory you could make the cfuse client work on windows..
[0:09] <gregaf> is there a fuse for windows I've never heard of?
[0:09] <Tv|work> there's a bunch of hacks
[0:10] <Tv|work> like serving SMB in the end
[0:10] <cmccabe> gregaf: windows has some kind of filesystem-from-userspace interface; I don't think it's called fuse
[0:10] <Tv|work> but attempts at following the api exist; i'd expect quality to be very low..
[0:10] <gregaf> anyway, it's not very feasible and given the workloads Ceph is targeted at I think the payoff would unfortunately be pretty low
[0:11] <cmccabe> ajnelson: arg, you need to typecast those pointers to void before calling free on them
[0:11] <cmccabe> ajnelson: so free((void*)g_conf.log_file) rather than free(g_conf.log_file)
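
In context, the corrected lines of the patch would read as below, assuming the log fields are declared const char * (C++ rejects passing a pointer-to-const to free(), which takes a plain void *):

    // the cast drops the const qualifier so free() will accept the pointer
    free((void*)g_conf.log_file);
    g_conf.log_file = NULL;
    free((void*)g_conf.log_dir);
    g_conf.log_dir = NULL;
    free((void*)g_conf.log_sym_dir);
    g_conf.log_sym_dir = NULL;
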
[0:11] <ajnelson> cmccabe: I also couldn't find that commit in unstable.
[0:11] <gnp421> I understand. It's just that I think it'd be great cross-platform-wise
[0:12] <Tv|work> there are very few things slower than windows filesystems ;)
[0:12] <cmccabe> ajnelson: it was a set of commits, not very easy to port
[0:12] <ajnelson> Gotcha.
[0:12] <cmccabe> 8adaa0478a94b5f731ee77e12a8dac29e3a7b46a was the main one
[0:13] <cmccabe> ajnelson: it's probably more practical to patch just the foreground programs you need, to force them to log to stdout
[0:13] <cmccabe> ajnelson: with a patch like the one I posted.
[0:14] <gnp421> Tv: I am aware. It's just that Windows implements NFSv3 currently, and it's pretty darn close to Linux NFS speeds
[0:14] <cmccabe> ajnelson: alternately, you could rebase onto unstable, which is where most development activity is now
[0:14] <ajnelson> cmccabe: I agree, just taking your lines and pasting them in.
[0:14] <ajnelson> cmccabe: should pasting fail, I'll try rebasing.
[0:14] <cmccabe> gnp421: last I heard, Windows had no native NFS, and you had to download NFSAxe or similar
[0:14] <cmccabe> gnp421: and speeds for NFSAxe were terrible
[0:14] <cmccabe> gnp421: have they rolled it into the core OS now?
[0:15] <gnp421> NFSv3 is native in Windows Server 2008 and up. 2003 had NFSv2
[0:15] <gnp421> yea
[0:15] * joshd (~joshd@ip-66-33-206-8.dreamhost.com) Quit (Quit: Leaving.)
[0:16] <cmccabe> gnp421: it's kind of weird that they would even do that, since usually Windows tries to force you to use proprietary filesystems (SMB, NTFS)
[0:16] <cmccabe> gnp421: I guess having NFS3 in 2010 was considered unthreatening enough, heh
[0:17] <gnp421> In Windows 8 they are moving to NFS4.
[0:17] <gnp421> They have a problem with NFS user-mapping but unmapped access performance is on par
[0:17] <cmccabe> gnp421: nfs 4.1 / pNFS is where the good performance is going to be at
[0:18] <gnp421> that's the implementation (4.1) they are going for
[0:18] <cmccabe> gnp421: but that's still not implemented by everyone, or wasn't when I last checked
[0:18] <Tv|work> mmm pNFS
[0:18] <Tv|work> btw anyone have a writeup of pNFS vs ceph?
[0:18] <cmccabe> gnp421: netapp and EMC are all over it
[0:18] <cmccabe> tv: NFS has a single metadata server
[0:19] <Tv|work> cmccabe: succinct reply ;)
[0:19] <cmccabe> tv: it's not a clustered filesystem, it just has a separate datapath
[0:19] <Tv|work> cmccabe: separate datapath = good, but yeah i get the point, it's gonna bottleneck at metadata operations
[0:19] <cmccabe> tv: so you can have multiple machines storing data, kind of like OSDs
[0:19] <cmccabe> tv: I'm not trying to dismiss it too much; it will be a big improvement
[0:19] <Tv|work> so my thinking is
[0:19] <cmccabe> tv: but not really comparable with lustre/ceph/glusterfs
[0:20] <Tv|work> whether you have one or more pNFS metadata servers is an implementation detail
[0:20] <Tv|work> i've already dealt with clustered-NFS-servers that served it from multiple IPs, on multiple nodes
[0:21] <cmccabe> tv: you can have a parallel backend, but an NFS server is a singular thing
[0:21] <Tv|work> try this for a challenge: a pNFS proxy that talks to ceph native could scale without a central bottleneck
[0:21] <Tv|work> just run multiple proxies, distribute client load
[0:21] <Tv|work> locking done in ceph-space, sharded by tree
[0:22] <cmccabe> tv: a filesystem is best viewed as an API that provides certain guarantees
[0:22] <cmccabe> tv: NFS guarantees that if two clients take a lock, only one will get it, using its locking service
[0:23] <cmccabe> tv: the NFSv2 and NFSv3 API was such that only a single metadata server could really implement it
[0:23] <ajnelson> cmccabe: log_to_syslog doesn't exist, so it doesn't compile. I'm just going to rebase.
[0:24] <cmccabe> tv: like consider what happens if I have two NFSv3 metadata servers, and a user creates directory FOO on one, and another user creates FOO as a file on the other.
[0:24] <Tv|work> cmccabe: so they share their locks, either in a separate lock service or in the ceph mds
[0:24] <cmccabe> tv: only one user can win. But how can the metadata servers tell if another server has signed off on a FOO creation?
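
A small C++ sketch of the race cmccabe describes, assuming two metadata servers that each keep an independent namespace and do a purely local check-then-create; both creations "succeed", and the conflict only surfaces after the fact:

    #include <cassert>
    #include <map>
    #include <string>

    enum EntryType { FILE_ENTRY, DIR_ENTRY };

    // Each server owns its own view of the directory; nothing is shared.
    struct MetadataServer {
        std::map<std::string, EntryType> names;
        bool create(const std::string &name, EntryType t) {
            // Local insert: succeeds unless *this* server already has the name.
            return names.insert(std::make_pair(name, t)).second;
        }
    };

    int main() {
        MetadataServer a, b;
        bool r1 = a.create("FOO", DIR_ENTRY);   // user 1 on server A
        bool r2 = b.create("FOO", FILE_ENTRY);  // user 2 on server B
        assert(r1 && r2);  // both "win" -- the namespace is now inconsistent
        return 0;
    }

Avoiding that split-brain requires the cross-server agreement cmccabe asks about, and paying for it on every operation is the cost he raises next.
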
[0:25] <cmccabe> tv: you can, but do you want to do that for every operation?
[0:25] <Tv|work> cmccabe: i'm not saying naively running >1 nfs servers is going to work; but making it work does seem like "just a matter of implementation"
[0:25] <Tv|work> cmccabe: and remember, i've had terabytes on a clustered NFS already, it worked just fine
[0:25] <cmccabe> tv: what vendor
[0:25] <Tv|work> Isilon
[0:25] <cmccabe> tv: yeah, Isilon is known for their clustered filesystems
[0:26] <cmccabe> tv: I never really studied what they did though
[0:26] <Tv|work> nfs proxies to a proprietary cluster filesystem
[0:26] <Tv|work> running on *bsd
[0:26] <cmccabe> tv: so the proxy is a singular server, through which all traffic passes
[0:26] <Tv|work> every node is a proxy
[0:26] <cmccabe> tv: and then there are cluster nodes on the backend handling the load
[0:26] <Tv|work> every node is a cluster node
[0:27] <cmccabe> tv: or is the proxying done by some more sophisticated mechanism
[0:27] <cmccabe> tv: like DNS round-robin?
[0:27] <Tv|work> (well they did introduce pure-storage nodes etc, but that was more about trading cpu off for more hard drives)
[0:27] <Tv|work> cmccabe: if you want; or just mount one of them
[0:27] <cmccabe> tv: so basically the recommended way would be dns round-robin.
[0:27] <Tv|work> the downside is more about availability than scalability
[0:28] <cmccabe> tv: why availability?
[0:28] <cmccabe> tv: as in, if one node fails, the system goes down?
[0:28] <Tv|work> if that one node goes down, its IP address is down too
[0:28] <Tv|work> not the system
[0:28] <Tv|work> the mounts pointing to that node
[0:28] <cmccabe> tv: ic
[0:28] <Tv|work> there's plenty of tricks in the HA world around that
[0:28] <cmccabe> tv: well, it really seems like you should be using automatic load balancing rather than manual
[0:28] <Tv|work> classic hot standby has the same basic issue
[0:28] <cmccabe> tv: I just can't see any advantage to manual
[0:28] <Tv|work> you can move IPs, use virtual IPs, etc
[0:28] <cmccabe> tv: if you have enough money for Isilon surely you can hire someone who knows DNS :)
[0:29] <Tv|work> DNS is not the right solution
[0:29] <cmccabe> tv: why is that
[0:29] <Tv|work> no control
[0:29] <Tv|work> clients do whatever they want
[0:29] <cmccabe> tv: well, the idea is there's one DNS entry, and you don't know what actual machine you'll contact.
[0:29] <Tv|work> no way to resume existing connections, etc
[0:29] <cmccabe> tv: so it seems like the opposite: clients *don't* do what they want
[0:30] <Tv|work> cmccabe: i've worked in the HA space for a long time, trust me DNS is nothing but a headache
[0:30] <cmccabe> tv: NFSv2 and I think NFSv3 are stateless
[0:30] <cmccabe> tv: at least in theory
[0:30] <cmccabe> tv: NFSv4 I think finally is stateful, even in theory
[0:31] <cmccabe> tv: the stateless thing led to a lot of weirdness in the protocol, in my opinion
[0:32] <cmccabe> tv: anyway, you have actual experience with Isilon, and I don't. It's interesting that you favor manual load-balancing.
[0:32] <Tv|work> i don't favor it
[0:32] <Tv|work> that's what they provided
[0:32] <Tv|work> there'd be tricks to do on top
[0:32] <Tv|work> but it never became an issue
[0:32] <cmccabe> oh, isilon was acquired by EMC recently
[0:32] <Tv|work> besides; if one of the worker nodes failed due to NFS mount, it'd just be taken out of the pool
[0:32] <cmccabe> I thought I remembered hearing something like that
[0:33] <cmccabe> did you follow the whole DataDomain thing?
[0:34] <Tv|work> what's that?
[0:34] <cmccabe> tv: NetApp launched a bid for DataDomain, but then was outbid by EMC
[0:34] <Tv|work> oh, hierarchical storage? not really interested
[0:34] <cmccabe> DataDomain was deduplication
[0:36] <cmccabe> The datadomain guys were really afraid that NetApp would win the bid
[0:36] <cmccabe> tv: the way it was explained to me is that netapp doesn't look kindly on people who leave their company and start a spinoff... and so their buying DataDomain out would be kind of a punitive thing
[0:37] <cmccabe> tv: they're lucky that EMC won, or else it would have been no bonuses for them... maybe even nothing for their stock.
[0:37] <Tv|work> hah
[0:37] * `gregorg` (~Greg@78.155.152.6) Quit (Read error: Connection reset by peer)
[0:37] * `gregorg` (~Greg@78.155.152.6) has joined #ceph
[0:37] <cmccabe> tv: a lot of people think that NetApp itself is too small to compete with IBM and EMC these days
[0:38] <Tv|work> on the other hand, big players play only the big game
[0:38] <cmccabe> tv: and I guess Sun/Oracle is going to try to get back into that business, if they can
[0:38] <Tv|work> there's money to be made by being the dropbox of storage, or something
[0:38] <cmccabe> tv: no, the consumer market is only for the Chinese
[0:38] <Tv|work> for hardware, sure
[0:39] <cmccabe> tv: it's just ultra low-margins, windows installers, etc.
[0:39] <Tv|work> software/services, not so much
[0:39] <NoahWatkins> Hey guys. I'm getting a seg fault when starting cosd. I'm running 0.24 built as deb packages. Any suggestions about trying to track down the problem?
[0:39] <Tv|work> NoahWatkins: anything in the log?
[0:39] <gnp421> NetApp has a software stack that scales with ease. That is why they are doing so well
[0:41] <cmccabe> gnp421: they're trusted by corporates because they have a completely solid product with great performance. "Scales with ease" is probably not really true, though. They only recently lifted the 16TB-per-fs limitation
[0:41] <NoahWatkins> tv: nothing striking, but here is the output: http://pastebin.com/AUK2bbHt
[0:41] <gnp421> I worded it wrong, sorry (fixing a build); they have a UI that's consistent across all products and platforms
[0:42] <cmccabe> gnp421: sure
[0:43] <cmccabe> noah: perhaps turning up osd logging would help
[0:43] <gnp421> cmccabe: HP is trying to compete with netapp in the Midrange space
[0:43] <Tv|work> NoahWatkins: sounds like your best bet is http://ceph.newdream.net/wiki/Debugging and/or getting a core dump
[0:44] <cmccabe> noah: also, this log doesn't include a stack trace
[0:46] <cmccabe> noah: I suggest ulimit -c unlimited && mkdir /var/crash && echo "/var/crash/core.%s.%e.%p" > /proc/sys/kernel/core_pattern
[0:46] <cmccabe> the core should have the stack trace, even if the log doesn't for some reason
[0:47] <NoahWatkins> cmccabe: i'll give this a shot, thx. i have a core dump, but is the stack trace in the log affected by the debug level?
[0:47] <cmccabe> noah: no
[0:47] <cmccabe> noah: the stack trace should be the same no matter what the debug level is
[0:48] <cmccabe> noah: unless the higher level of debugging is somehow triggering the error
[0:48] <NoahWatkins> cmccabe: alright, I got the cosd core dump. will gdb give me the stack trace?
[0:49] <cmccabe> noah: yep, just do gdb ./cosd
[0:49] <cmccabe> noah: then core /path/to/core
[0:49] <cmccabe> noah: then bt
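
Put together, the session cmccabe describes looks like this (the core path is a placeholder):

    $ gdb ./cosd
    (gdb) core /path/to/core
    (gdb) bt
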
[0:51] <NoahWatkins> cmccabe: here is the backtrace http://pastebin.com/RXN8BjtY
[0:52] <cmccabe> noah: I suspect that your journal config is not set, or invalid
[0:52] <jantje> hi everyone
[0:52] <NoahWatkins> the "osd journal
[0:52] <NoahWatkins> config option?
[0:52] <cmccabe> noah: yeah
[0:52] <jantje> sagewk: still here? :p
[0:53] <NoahWatkins> cmccabe: i'll see if there is an issue
[0:53] <cmccabe> noah: I think it might be related to that "btrfs START_SYNC got -1 Operation not permitted"
[0:53] <cmccabe> noah: could be that your version of btrfs is not new enough, and unfortunately our error messages are not very clear
[0:54] <cmccabe> noah: I haven't dealt with the osd journal much yet so I can't shed much light on it
[0:57] <sagewk> jantje: am now
[0:57] * ajnelson (~Adium@soenat3.cse.ucsc.edu) Quit (Quit: Leaving.)
[0:58] <NoahWatkins> cmccabe: you're right about the btrfs version being old (using kernel 2.6.35). any workaround?
[0:58] * verwilst (~verwilst@dD576FAAE.access.telenet.be) has joined #ceph
[0:58] <sagewk> noah: you should be fine without the new btrfs ioctls
[0:58] <cmccabe> sagewk: he's getting some btrfs errors followed by "mount WARNING: no journal"
[0:59] <cmccabe> sagewk: then we try to deref journal and segfault
[0:59] <sagewk> but you do need to configure an osd journal :)
[0:59] <jantje> sagewk: oh, great
[0:59] <jantje> sagewk: i think i found the issue, or came closer to it
[0:59] <sagewk> it's probably a bug when a journal isn't configured.. we should fix that so it doesn't segfault
[0:59] <cmccabe> sagewk: I guess we should probably exit instead of just printing "no journal" :)
[0:59] <sagewk> but even so, he should use a journal or performance will be bad.
[0:59] <jantje> sagewk: were you able to read your backlog from yesterday?
[1:00] <sagewk> some of it.
[1:00] <jantje> it basically comes down to this: when the directory size is larger than 4GB, i get that issue
[1:00] <sagewk> last i remember you were getting an error but weren't able to pinpoint where it was coming from? (no strace?)
[1:00] <jantje> the strace is vague
[1:00] <sagewk> oh, the directory size. i see.
[1:01] <sagewk> you mean the i_size on the directory.
[1:01] <jantje> the one reported on a ls -al
[1:01] <jantje> don't know which one that is :-)
[1:02] <sagewk> you can mount with '-o norbytes' and it won't be big
[1:02] <sagewk> which syscall is having trouble with it?
[1:02] <sagewk> or is it an app?
[1:02] <NoahWatkins> cmccabe: good call. i was using a recycled, old ceph.conf that didn't have the journal configured. things are back to normal now :) thx
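
For reference, a minimal [osd] section with a journal configured, in the ceph.conf style of this era; the path and size shown here are assumptions, not Noah's actual values:

    [osd]
        ; without "osd journal", cosd of this vintage could hit the
        ; null-journal segfault discussed above
        osd journal = /srv/osd$id/journal
        osd journal size = 1000   ; megabytes
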
[1:02] <jantje> i ran cc -I/path/to/dir
[1:03] <sagewk> i guess my question is: is a stat call returning an unexpected error code, or is cc choking because the size is bigger than it expects?
[1:03] <sagewk> can you tell from the actual error it reports?
[1:04] <Tv|work> sagewk: what branch do you guys really work on? i have a commit that adds a unit test framework, what should i base it on?
[1:04] <cmccabe> noah: should still file a bug to make the no-journal case work though
[1:04] <jantje> let me open my vpn session to work, just a sec
[1:04] <cmccabe> noah: we support it at least in theory :P
[1:04] <NoahWatkins> cmccabe: on it. thanks for the quick turn around
[1:04] <cmccabe> noah: np, glad it works now
[1:04] <sagewk> if stat is reporting an error code, it's a bug we need to fix. if cc doesn't like a big dir size, it's probably something you should just work around with -o norbytes
[1:05] <sagewk> cmccabe: can you fix the null deref for the no journal case?
[1:05] <cmccabe> sagewk: k
[1:05] <sagewk> thanks
[1:05] <cmccabe> sagewk: my 'create a ton of pools' test seems to have stalled as well
[1:06] <cmccabe> sagewk: I think there's a lurking bug related to having lots and lots of pools
[1:06] <cmccabe> sagewk: anyway, will check the journal thing first.
[1:10] * ajnelson (~Adium@soenat3.cse.ucsc.edu) has joined #ceph
[1:14] <jantje> sagewk: i did a stat64 on the directory and it was fine
[1:15] <jantje> i'll give the mount option a try
[1:16] <jantje> somehow my cluster looks dead
[1:16] <jantje> and ceph -w reports everything up
[1:16] <jantje> wicked
[1:16] <cmccabe> jantje: are all osds dead?
[1:18] <jantje> processes are running
[1:18] <jantje> some of them are ~3% memory usage
[1:18] <jantje> and some are way up to 10-15%
[1:19] <jantje> lets see what they're doing
[1:19] <sagewk> jantje: is stat64 what cc is doing? what error message does it report?
[1:21] <jantje> the error cc reported was 'value too large for defined data type'
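
That message is the strerror() text for EOVERFLOW, which matches what the log shows elsewhere: jantje's stat64() on the directory succeeds while cc's (32-bit) stat() fails. A hedged C++ sketch of how that happens; build it without -D_FILE_OFFSET_BITS=64 on a 32-bit host to reproduce, with the path as a placeholder:

    #include <sys/stat.h>
    #include <cerrno>
    #include <cstdio>
    #include <cstring>

    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <path>\n", argv[0]);
            return 2;
        }
        struct stat st;
        if (stat(argv[1], &st) != 0) {
            // On 32-bit builds with a 32-bit off_t, stat() fails with
            // EOVERFLOW ("Value too large for defined data type") when
            // st_size does not fit in 32 bits -- e.g. a ceph directory
            // whose recursive-bytes size exceeds 4GB.
            fprintf(stderr, "stat(%s): %s\n", argv[1], strerror(errno));
            return 1;
        }
        printf("st_size = %lld\n", (long long)st.st_size);
        return 0;
    }
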
[1:21] <jantje> i can't run it right now, it's down
[1:22] <jantje> i did put a strace online, but it's expired
[1:22] <jantje> cmccabe: some osd process is doing: futex(0xea51cc, FUTEX_WAIT_PRIVATE, 1, NULL
[1:23] <jantje> and waiting...
[1:23] <cmccabe> jantje: you could attach with gdb and do "thread apply bt all"
[1:23] <cmccabe> er, thread apply all bt
[1:24] <cmccabe> jantje: probably best to check for something in the logs first if possible
[1:25] <jantje> the only thing i see are some piping errors
[1:25] <cmccabe> jantje: as in , messenger?
[1:26] <jantje> 2011-01-07 14:41:04.568458 7f3e47553710 -- 138.203.10.98:6801/2640 send_message dropped message osd_op_reply(675071 10000008ffd.00000015 [write 4071424~4096 [2@0]] ondisk = 0) v1 because of no pipe
[1:27] <jantje> 2011-01-07 14:42:06.622706 7f3e43738710 -- 138.203.10.98:6801/2640 >> 138.203.10.101:0/2249980288 pipe(0x436e780 sd=24 pgs=0 cs=0 l=0).accept peer addr is really 138.203.10.101:0/2249980288 (socket is 138.203.10.101:42817/0)
[1:27] <jantje> I was running an iozone random read/write workload
[1:29] <jantje> cmccabe: lots of threads, but they all look the same
[1:29] <jantje> Thread 26 (Thread 0x7f3e46551710 (LWP 2819)):
[1:29] <jantje> #0 pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:162
[1:29] <jantje> #1 0x00000000005db925 in Cond::Wait (this=0x23c75e0) at common/Cond.h:46
[1:29] <jantje> #2 ThreadPool::worker (this=0x23c75e0) at common/WorkQueue.cc:59
[1:29] <jantje> #3 0x0000000000506b1d in ThreadPool::WorkThread::entry() ()
[1:29] <jantje> #4 0x000000000047941a in Thread::_entry_func (arg=0x23c7634) at ./common/Thread.h:39
[1:29] <jantje> #5 0x00007f3e52cbb8ba in start_thread (arg=<value optimized out>) at pthread_create.c:300
[1:29] <jantje> #6 0x00007f3e5195002d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:112
[1:29] <jantje> #7 0x0000000000000000 in ?? ()
[1:31] <jantje> need something else? i'll set my debug levels higher, any recommendations?
[1:32] <sagewk> what version?
[1:32] <jantje> .24
[1:32] <sagewk> try latest testing branch
[1:32] <sagewk> there were a number of fixed bugs
[1:34] * verwilst (~verwilst@dD576FAAE.access.telenet.be) Quit (Quit: Ex-Chat)
[1:35] <jantje> cmccabe: not only osd processes are waiting for the futex, cmds and cmon as well
[1:35] <cmccabe> jantje: that's normal and expected
[1:36] <jantje> k
[1:36] <cmccabe> jantje: futexes are used to implement mutexes and condition variables
[1:40] <jantje> sagewk: oh, i forgot, http://paranoid.nl/~jan/sage.txt
[1:41] <jantje> but I guess you're right about cc not being able to handle large dirsizes
[1:41] <sagewk> i see
[1:41] <sagewk> yeah
[1:42] <sagewk> can you mount with -o norbytes and see if that fixes it?
[1:42] <sagewk> if so, we can make that default to off on 32-bit archs
[1:42] <sagewk> (the dir size will be the number of files instead of number of recursive bytes)
[1:43] <jantje> or just stick with 4.0k ? :)
[1:43] <jantje> i'm currently recompiling, i'll let you know in some minutes
[1:44] <jantje> and can you make mkcephfs work without -k?
[1:45] <jantje> i'm not using any key stuff
[1:45] <jantje> but mkcephfs requires it
[1:47] <sagewk> yeah
[1:48] <sagewk> we can just make it write to /etc/ceph/keyring.bin by default so you don't have to include it on the commandline
[1:48] <sagewk> we generate the keys, tho, even if it's off
[1:48] <jantje> oh, ok
[1:49] <sagewk> making them specify is just a safety thing so they know where it is, don't clobber, whatever
[1:49] <sagewk> maybe it should only require it or throw a warning if it exists or something
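
For context, the invocation being discussed looked roughly like this in releases of that period; the flags other than -c and -k are an assumption, so treat it as illustrative only:

    $ mkcephfs -c /etc/ceph/ceph.conf --allhosts -k /etc/ceph/keyring.bin
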
[1:53] * NoahWatkins (~NoahWatki@soenat3.cse.ucsc.edu) Quit (Remote host closed the connection)
[2:02] <jantje> sagewk: no, it does not solve it
[2:02] <jantje> (ls still reports a large size on the directory)
[2:02] <jantje> drwxr-xr-x 1 root root 4.0G Jan 7 20:00 .
[2:02] <jantje> 138.203.10.98,138.203.10.99,138.203.10.100:/ on /mnt/ceph type ceph (rw,norbytes)
[2:05] <jantje> i really have to go now
[2:05] <jantje> sorry :(
[2:07] * gnp421_ (~hutchint@c-75-71-83-44.hsd1.co.comcast.net) has joined #ceph
[2:08] <cmccabe> jantje: ah, too bad the norbytes thing didn't work
[2:08] <cmccabe> jantje: have a good weekend, we'll dig into it next week I guess
[2:09] <sagewk> jantje: can you post an strace with norbytes at some point?
[2:09] <sagewk> thanks
[2:13] * gnp421 (~hutchint@c-75-71-83-44.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[2:14] * gnp421_ is now known as gnp421
[2:21] * cmccabe (~cmccabe@208.80.64.200) Quit (Remote host closed the connection)
[2:23] * ajnelson (~Adium@soenat3.cse.ucsc.edu) Quit (Read error: Operation timed out)
[2:28] * ajnelson (~Adium@dhcp-63-189.cse.ucsc.edu) has joined #ceph
[2:30] * ken_barber (~kbarber@93-97-221-206.zone5.bethere.co.uk) Quit (Ping timeout: 480 seconds)
[2:30] * ken_barber (~kbarber@94-194-180-108.zone8.bethere.co.uk) has joined #ceph
[2:46] * Tv|work (~Tv|work@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[2:52] * gnp421 (~hutchint@c-75-71-83-44.hsd1.co.comcast.net) Quit (Ping timeout: 480 seconds)
[3:17] * ajnelson (~Adium@dhcp-63-189.cse.ucsc.edu) Quit (Ping timeout: 480 seconds)
[3:19] * sjust (~sam@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[3:24] * bchrisman (~Adium@70-35-37-146.static.wiline.com) Quit (Quit: Leaving.)
[3:35] * ken_barber (~kbarber@94-194-180-108.zone8.bethere.co.uk) Quit (Remote host closed the connection)
[3:35] * ken_barber (~kbarber@94-194-180-108.zone8.bethere.co.uk) has joined #ceph
[3:41] * ken_barber (~kbarber@94-194-180-108.zone8.bethere.co.uk) Quit (Remote host closed the connection)
[4:07] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) Quit (Ping timeout: 480 seconds)
[4:16] * tjikkun (~tjikkun@195-240-122-237.ip.telfort.nl) has joined #ceph
[4:38] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) has joined #ceph
[5:47] * joshd (~jdurgin@adsl-75-28-69-238.dsl.irvnca.sbcglobal.net) has joined #ceph
[6:34] * ijuz_ (~ijuz@p4FFF63D3.dip.t-dialin.net) Quit (Ping timeout: 480 seconds)
[6:34] * gnp421 (~hutchint@c-75-71-83-44.hsd1.co.comcast.net) has joined #ceph
[6:43] * ijuz_ (~ijuz@p4FFF77C2.dip.t-dialin.net) has joined #ceph
[7:01] * bchrisman1 (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) has joined #ceph
[7:01] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[7:19] * gnp421 (~hutchint@c-75-71-83-44.hsd1.co.comcast.net) Quit (Quit: Leaving)
[7:42] * tjikkun (~tjikkun@195-240-122-237.ip.telfort.nl) Quit (Ping timeout: 480 seconds)
[7:42] * joshd (~jdurgin@adsl-75-28-69-238.dsl.irvnca.sbcglobal.net) Quit (Quit: Leaving.)
[7:52] * yehudasa (~yehudasa@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[7:52] * sagewk (~sage@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[7:53] * yehudasa (~yehudasa@ip-66-33-206-8.dreamhost.com) has joined #ceph
[7:54] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) Quit (Ping timeout: 480 seconds)
[7:57] * sagewk (~sage@ip-66-33-206-8.dreamhost.com) has joined #ceph
[7:58] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[8:09] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) has joined #ceph
[10:17] * allsystemsarego (~allsystem@188.27.167.49) has joined #ceph
[11:25] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[11:32] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (Quit: Yoric)
[11:38] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[12:43] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) Quit (Quit: WeeChat 0.2.6)
[12:47] * s15y (~s15y@sac91-2-88-163-166-69.fbx.proxad.net) has joined #ceph
[13:43] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) has joined #ceph
[13:43] * bchrisman1 (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) Quit (Read error: Connection reset by peer)
[15:06] * allsystemsarego (~allsystem@188.27.167.49) Quit (Quit: Leaving)
[16:15] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) Quit (Ping timeout: 480 seconds)
[16:18] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) has joined #ceph
[16:23] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) Quit (Ping timeout: 480 seconds)
[16:23] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) has joined #ceph
[16:50] * ken_barber (~ken_barbe@94-194-180-108.zone8.bethere.co.uk) has joined #ceph
[17:29] * ken_barber (~ken_barbe@94-194-180-108.zone8.bethere.co.uk) Quit (Quit: Leaving)
[17:45] * eternaleye_ (~eternaley@195.215.30.181) has joined #ceph
[17:45] * eternaleye (~eternaley@195.215.30.181) Quit (Read error: Connection reset by peer)
[17:46] * greglap (~Adium@166.205.138.19) has joined #ceph
[17:52] * alexxy (~alexxy@79.173.81.171) Quit (charon.oftc.net solenoid.oftc.net)
[17:52] * stingray (~stingray@stingr.net) Quit (charon.oftc.net solenoid.oftc.net)
[17:52] * Mark23 (~mark@195.184.64.194) Quit (charon.oftc.net solenoid.oftc.net)
[17:58] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[17:58] * stingray (~stingray@stingr.net) has joined #ceph
[17:58] * Mark23 (~mark@195.184.64.194) has joined #ceph
[18:34] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * Mark23 (~mark@195.184.64.194) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * stingray (~stingray@stingr.net) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * alexxy (~alexxy@79.173.81.171) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * `gregorg` (~Greg@78.155.152.6) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * hijacker (~hijacker@213.91.163.5) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * sunech (~felix@217.195.176.49) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * jantje (~jan@paranoid.nl) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * andret (~andre@pcandre.nine.ch) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * ElectricBill (~bill@smtpv2.cosi.net) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * MK_FG (~MK_FG@188.226.51.71) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * pruby (~tim@leibniz.catalyst.net.nz) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * greglap (~Adium@166.205.138.19) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * eternaleye_ (~eternaley@195.215.30.181) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * __jt__ (~james@jamestaylor.org) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * Meths (rift@91.106.217.147) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * darkfader (~floh@host-93-104-226-28.customer.m-online.net) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * atg (~atg@please.dont.hacktheinter.net) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * yehudasa (~yehudasa@ip-66-33-206-8.dreamhost.com) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * johnl_ (~johnl@109.107.34.14) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * monrad-51468 (~mmk@domitian.tdx.dk) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * ijuz_ (~ijuz@p4FFF77C2.dip.t-dialin.net) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * zoobab (zoobab@vic.ffii.org) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * Anticimex (anticimex@netforce.csbnet.se) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * Guest3437 (quasselcor@bas11-montreal02-1128535712.dsl.bell.ca) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * cclien (~cclien@ec2-175-41-146-71.ap-southeast-1.compute.amazonaws.com) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * michael-ndn (~michael-n@12.248.40.138) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * nolan (~nolan@phong.sigbus.net) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * sagewk (~sage@ip-66-33-206-8.dreamhost.com) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * Jiaju (~jjzhang@222.126.194.154) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * DeHackEd (~dehacked@dhe.execulink.com) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * iggy (~iggy@theiggy.com) Quit (kinetic.oftc.net charon.oftc.net)
[18:34] * sage (~sage@dsl092-035-022.lax1.dsl.speakeasy.net) Quit (kinetic.oftc.net charon.oftc.net)
[18:38] * Mark23 (~mark@195.184.64.194) has joined #ceph
[18:38] * stingray (~stingray@stingr.net) has joined #ceph
[18:38] * alexxy (~alexxy@79.173.81.171) has joined #ceph
[18:38] * greglap (~Adium@166.205.138.19) has joined #ceph
[18:38] * eternaleye_ (~eternaley@195.215.30.181) has joined #ceph
[18:38] * bchrisman (~Adium@c-24-130-226-22.hsd1.ca.comcast.net) has joined #ceph
[18:38] * tjikkun (~tjikkun@2001:7b8:356:0:204:bff:fe80:8080) has joined #ceph
[18:38] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) has joined #ceph
[18:38] * gregaf (~Adium@ip-66-33-206-8.dreamhost.com) has joined #ceph
[18:38] * sagewk (~sage@ip-66-33-206-8.dreamhost.com) has joined #ceph
[18:38] * yehudasa (~yehudasa@ip-66-33-206-8.dreamhost.com) has joined #ceph
[18:38] * ijuz_ (~ijuz@p4FFF77C2.dip.t-dialin.net) has joined #ceph
[18:38] * `gregorg` (~Greg@78.155.152.6) has joined #ceph
[18:38] * ElectricBill (~bill@smtpv2.cosi.net) has joined #ceph
[18:38] * zoobab (zoobab@vic.ffii.org) has joined #ceph
[18:38] * MK_FG (~MK_FG@188.226.51.71) has joined #ceph
[18:38] * jeffhung (~jeffhung@60-250-103-120.HINET-IP.hinet.net) has joined #ceph
[18:38] * pruby (~tim@leibniz.catalyst.net.nz) has joined #ceph
[18:38] * Guest3437 (quasselcor@bas11-montreal02-1128535712.dsl.bell.ca) has joined #ceph
[18:38] * Anticimex (anticimex@netforce.csbnet.se) has joined #ceph
[18:38] * hijacker (~hijacker@213.91.163.5) has joined #ceph
[18:38] * johnl_ (~johnl@109.107.34.14) has joined #ceph
[18:38] * sunech (~felix@217.195.176.49) has joined #ceph
[18:38] * Meths (rift@91.106.217.147) has joined #ceph
[18:38] * darkfader (~floh@host-93-104-226-28.customer.m-online.net) has joined #ceph
[18:38] * Jiaju (~jjzhang@222.126.194.154) has joined #ceph
[18:38] * DeHackEd (~dehacked@dhe.execulink.com) has joined #ceph
[18:38] * monrad-51468 (~mmk@domitian.tdx.dk) has joined #ceph
[18:38] * cclien (~cclien@ec2-175-41-146-71.ap-southeast-1.compute.amazonaws.com) has joined #ceph
[18:38] * __jt__ (~james@jamestaylor.org) has joined #ceph
[18:38] * jantje (~jan@paranoid.nl) has joined #ceph
[18:38] * atg (~atg@please.dont.hacktheinter.net) has joined #ceph
[18:38] * andret (~andre@pcandre.nine.ch) has joined #ceph
[18:38] * michael-ndn (~michael-n@12.248.40.138) has joined #ceph
[18:38] * nolan (~nolan@phong.sigbus.net) has joined #ceph
[18:38] * sage (~sage@dsl092-035-022.lax1.dsl.speakeasy.net) has joined #ceph
[18:38] * iggy (~iggy@theiggy.com) has joined #ceph
[18:41] * allsystemsarego (~allsystem@188.27.167.49) has joined #ceph
[18:56] * greglap (~Adium@166.205.138.19) Quit (Quit: Leaving.)
[18:59] * helix_ (~helix@jem75-2-82-233-232-223.fbx.proxad.net) has joined #ceph
[19:13] * Yoric (~David@dau94-10-88-189-211-192.fbx.proxad.net) Quit (Quit: Yoric)
[19:24] * helix_ (~helix@jem75-2-82-233-232-223.fbx.proxad.net) Quit (Remote host closed the connection)
[19:32] * helix_ (~helix@jem75-2-82-233-232-223.fbx.proxad.net) has joined #ceph
[19:51] * ajnelson (~Adium@dhcp-225-235.cruznetsecure.ucsc.edu) has joined #ceph
[20:38] * ajnelson (~Adium@dhcp-225-235.cruznetsecure.ucsc.edu) Quit (Quit: Leaving.)
[21:25] * ajnelson (~Adium@dhcp-225-235.cruznetsecure.ucsc.edu) has joined #ceph
[22:00] * ajnelson (~Adium@dhcp-225-235.cruznetsecure.ucsc.edu) Quit (Quit: Leaving.)
[22:45] * ajnelson (~Adium@adsl-99-139-49-113.dsl.pltn13.sbcglobal.net) has joined #ceph
[23:03] * ajnelson (~Adium@adsl-99-139-49-113.dsl.pltn13.sbcglobal.net) Quit (Quit: Leaving.)
[23:54] * allsystemsarego (~allsystem@188.27.167.49) Quit (Quit: Leaving)

These logs were automatically created by CephLogBot on irc.oftc.net using the Java IRC LogBot.