IRC log for #gluster, 2014-11-26


All times shown according to UTC.

Time Nick Message
00:07 nishanth joined #gluster
00:26 sputnik13 joined #gluster
00:35 calisto joined #gluster
00:40 julim joined #gluster
00:48 cleo_ joined #gluster
00:57 RicardoSSP joined #gluster
01:09 calum_ joined #gluster
01:10 plarsen joined #gluster
01:14 cleo_ hi do you have any easy references about Elastic Hash Algorithm?
01:14 bala joined #gluster
01:14 cleo_ it's a bit hard to understand EHA
01:18 JoeJulian @lucky dht misses are expensive
01:18 glusterbot JoeJulian: http://joejulian.name/blog/dht-misses-are-expensive/
01:18 JoeJulian cleo_: That link. ^
01:22 gildub joined #gluster
01:22 gildub joined #gluster
01:25 cleo_ Thanks a lot!!
01:27 lyang0 joined #gluster
01:35 topshare joined #gluster
01:53 harish_ joined #gluster
02:01 haomaiwa_ joined #gluster
02:08 kshlm joined #gluster
02:13 bharata-rao joined #gluster
02:15 soumya joined #gluster
02:18 kaushal_ joined #gluster
02:20 calum_ joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/ | GlusterFS 3.6 test week - https://public.pad.fsfe.org/p/GlusterFS-3.6-test-doc
02:52 sputnik13 joined #gluster
02:59 meghanam_ joined #gluster
02:59 meghanam joined #gluster
03:12 soumya joined #gluster
03:26 kdhananjay joined #gluster
03:37 gildub joined #gluster
03:52 rjoseph joined #gluster
03:53 kshlm joined #gluster
04:00 ndarshan joined #gluster
04:00 saurabh joined #gluster
04:08 RameshN joined #gluster
04:08 itisravi joined #gluster
04:14 shubhendu joined #gluster
04:22 glusterbot News from newglusterbugs: [Bug 1168080] All the bricks on one of the node goes offline and doesn't comes back up when one of the node is shutdown and the other node is rebooted in 2X2 gluster volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1168080>
04:23 anoopcs joined #gluster
04:28 ArminderS joined #gluster
04:33 jiffin joined #gluster
04:35 anoopcs joined #gluster
04:40 anoopcs joined #gluster
04:41 rafi1 joined #gluster
04:44 EinstCrazy joined #gluster
04:47 shubhendu joined #gluster
04:51 nbalachandran joined #gluster
04:51 atinmu joined #gluster
05:02 meghanam joined #gluster
05:02 meghanam_ joined #gluster
05:06 smohan joined #gluster
05:06 meghanam joined #gluster
05:06 meghanam_ joined #gluster
05:07 kumar joined #gluster
05:07 lalatenduM joined #gluster
05:08 ArminderS can anyone point me to a good guide to tweak gluster performance
05:08 ArminderS got thousands of small sized files
05:09 ArminderS recursive chmod takes ages to complete
05:10 vimal joined #gluster
05:11 bjornar joined #gluster
05:16 spandit joined #gluster
05:18 haomaiwa_ joined #gluster
05:22 shubhendu joined #gluster
05:25 deepakcs joined #gluster
05:26 EinstCrazy hello
05:26 glusterbot EinstCrazy: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
05:30 bala joined #gluster
05:31 haomai___ joined #gluster
05:34 jiffin joined #gluster
05:34 hagarth joined #gluster
05:37 jiffin1 joined #gluster
05:40 msmith joined #gluster
05:41 dusmant joined #gluster
05:42 soumya joined #gluster
05:43 zerick joined #gluster
05:48 kotresh_ joined #gluster
05:49 ramteid joined #gluster
05:54 glusterbot News from resolvedglusterbugs: [Bug 1168080] All the bricks on one of the node goes offline and doesn't comes back up when one of the node is shutdown and the other node is rebooted in 2X2 gluster volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1168080>
06:00 kdhananjay joined #gluster
06:01 pp joined #gluster
06:02 epequeno joined #gluster
06:03 overclk joined #gluster
06:10 ppai joined #gluster
06:18 bala joined #gluster
06:32 nishanth joined #gluster
06:37 JoeJulian ArminderS: Steps taken by chmod -R: getdents() = readdir or readdirplus depending on kernel version. for each file in that directory, fstat() = hash the filename. lookup() the file on the dht subvolume mapped to that hash for the file -> lookup() each replica in that dht subvolume -> check the extended attributes for pending changes -> there are none -> return result. chmod() -> repeat from lookup() to "there are none" -> increment pending
06:37 JoeJulian metadata change attribute on all replica for this dht subvolume, make the change, decrement pending metadata change attributes. Repeat for each file. So now the question is, is there any way to make all that faster?
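A rough way to watch that sequence happen is to trace a recursive chmod from a client mount (a sketch only; the mount path is a placeholder and the exact syscall names vary with kernel and libc version):

    # count the directory reads and per-file metadata calls chmod -R issues
    strace -f -c -e trace=getdents,getdents64,stat,lstat,newfstatat,chmod,fchmodat \
        chmod -R u+rw /mnt/glustervol/some/dir

Each per-file call turns into a lookup (and self-heal check) on every replica, which is where the time goes.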
06:38 ArminderS JoeJulian: very nice of you to break down the whole process
06:39 ArminderS but aye, the question remains -> is there any way to make all that faster?
06:40 soumya joined #gluster
06:40 JoeJulian Yes, lower latency connections. rdma, use libgfapi to avoid context switches. To speed up context switches, faster cpus and ram.
06:41 JoeJulian But there's no way to make the process any faster except to avoid doing it.
06:42 JoeJulian If you don't care about self-heal, you can disable that. That would mean that it's possible for a client to receive stale or invalid data.
06:42 ArminderS any ballpark figures for cpu and ram
06:42 JoeJulian The self-heal daemon would still run so it would be eventually consistent.
06:42 JoeJulian How long is a string.
06:43 JoeJulian I heard that earlier today and loved it... :D
06:43 ArminderS haha
06:43 JoeJulian A better way of engineering is to determine your needs first, then design a system that meets them.
06:44 ArminderS aye, i do agree
06:44 nshaikh joined #gluster
06:44 JoeJulian If you're anything like me, the other way around doesn't work. I don't care how fast a computer is, it still takes too long to boot.
06:44 ArminderS didn't have those small files in my initial thoughts
06:45 JoeJulian Are you doing recursive operations on those files frequently?
06:45 ArminderS and those run 5-levels deep
06:45 ArminderS the devs got the ansible scripts that does that on each deployment
06:46 ArminderS need to ask him why he needs that
06:46 Philambdo joined #gluster
06:46 JoeJulian Recursion requires stat calls for every filename to determine if it's a directory. If you just "echo *" in one of those directories, you'll notice it's actually pretty quick.
06:47 JoeJulian If you can avoid a recursion, you can avoid a lot of self-heal checks.
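A quick way to see the cost of those per-entry stat calls on a mounted volume (a sketch; the path is a placeholder):

    cd /mnt/glustervol/some/dir
    time ( echo * > /dev/null )     # readdir only, no per-file lookups
    time ( ls -l > /dev/null )      # stats every entry, so every entry gets a lookup/self-heal check
    time ( find . > /dev/null )     # recursive: stats everything at every level

The first one should come back almost instantly; the gap between it and the other two is roughly the overhead being discussed here.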
06:48 JoeJulian Does that make sense the way I said it?
06:49 ArminderS aye
06:50 dusmant joined #gluster
06:50 JoeJulian Good, 'cause I'm getting tired and not making sense to myself.
06:50 ArminderS :D
07:00 ppai joined #gluster
07:10 ArminderS joined #gluster
07:20 rjoseph joined #gluster
07:23 kovshenin joined #gluster
07:24 m0ellemeister joined #gluster
07:28 azar joined #gluster
07:33 rgustafs joined #gluster
07:35 Debloper joined #gluster
07:37 raghu` joined #gluster
07:43 LebedevRI joined #gluster
07:45 ctria joined #gluster
07:47 sputnik13 joined #gluster
07:53 rjoseph joined #gluster
07:56 dusmant joined #gluster
07:59 m0ellemeister i have a GlusterFS Cluster (v3.3) running on two RHEL 6.3 nodes
07:59 m0ellemeister when I try to update by executing yum update, this happens: http://pastebin.com/nNCYisLN
07:59 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
08:02 m0ellemeister yum update would update GlusterFS to version 3.6
08:02 m0ellemeister which does not work at all
08:05 m0ellemeister my guess is that the way to work around this issue is to uninstall GlusterFS before executing yum update
08:06 m0ellemeister after the updates have been applied successfully I would try to re-install GlusterFS v3.6
08:06 ndevos m0ellemeister: this should work for you too: http://blog.gluster.org/2014/11/installing-glusterfs-3-4-x-3-5-x-or-3-6-0-on-rhel-or-centos-6-6-2/
08:08 ndevos m0ellemeister: on rhel-6.3, the .repo files for RHN could be in a different directory, possibly /etc/yum/pluginconf.d/
08:08 m0ellemeister is it possible to keep everything under /var/lib/glusterd or do I have to re-create the whole setup?
08:09 m0ellemeister ndevos, thx, I'll have a close look at that
08:09 ppai joined #gluster
08:10 ndevos m0ellemeister: with that, you should be able to stick to 3.3.x - but note that 3.3 will not get any updates, 3.4 is the 'oldest' supported stable version
08:11 ndevos m0ellemeister: oh, and on rhel-6.3 (using plain RHN, not subscription-manager) you would set the exclude= option in /etc/yum/pluginconf.d/rhnplugin.conf
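As a sketch, the exclude ndevos mentions looks roughly like this (the glob is an assumption; adjust it if you only want to pin particular packages):

    # /etc/yum/pluginconf.d/rhnplugin.conf  (classic RHN; subscription-manager
    # users would put the same exclude= line in the matching repo section instead)
    [main]
    enabled = 1
    # keep the RHEL channel from replacing the community GlusterFS packages
    exclude = glusterfs*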
08:12 uebera|| joined #gluster
08:14 [Enrico] joined #gluster
08:16 hagarth @channelstats
08:16 glusterbot hagarth: On #gluster there have been 390402 messages, containing 15047707 characters, 2473727 words, 8893 smileys, and 1251 frowns; 1770 of those messages were ACTIONs.  There have been 177325 joins, 4472 parts, 173217 quits, 29 kicks, 2276 mode changes, and 8 topic changes.  There are currently 254 users and the channel has peaked at 254 users.
08:16 hagarth at the peak now :)
08:17 m0ellemeister do I understand that article right? with rhel-6.6 there are no GlusterFS packages available (by RHN repos) to run a server?
08:17 ndevos m0ellemeister: yes, that is correct, if you want the glusterfs-server package from Red Hat, you need to buy Red Hat Storage Server
08:18 m0ellemeister ah, ok that's the hook ;-)
08:18 ndevos m0ellemeister: Red Hat added glusterfs-fuse and other client parts to their main RHEL channels, and that makes it difficult to install our community packages
08:19 * m0ellemeister could cry ;-(
08:20 ndevos you're not the only one :-/
08:22 m0ellemeister ndevos, many thanks for your help, now I have a clue how to get this going but it feels like it won't be a lot of fun
08:22 Debloper joined #gluster
08:23 ndevos m0ellemeister: yeah, it isn't very fun to do, but once done, you should not have any issues with it in the (at least near) future
08:24 elico joined #gluster
08:24 glusterbot News from resolvedglusterbugs: [Bug 789278] Issues reported by Coverity static analysis tool <https://bugzilla.redhat.com/show_bug.cgi?id=789278>
08:27 fsimonce joined #gluster
08:28 m0ellemeister ndevos, I hope so. Now I can make up a plan for the update, again many thx
08:30 T0aD joined #gluster
08:34 ppai joined #gluster
08:45 deepakcs @paste
08:45 glusterbot deepakcs: For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
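For example, on an RPM-based distro (the command being piped is just an illustration):

    yum install -y fpaste
    gluster volume info 2>&1 | fpaste    # prints a URL you can share in the channel

On Debian/Ubuntu the equivalent would be installing pastebinit and piping output into pastebinit instead.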
08:48 anil joined #gluster
08:50 dusmant joined #gluster
08:53 shubhendu joined #gluster
08:58 topshare joined #gluster
09:00 NigeyS joined #gluster
09:00 NigeyS morning :)
09:01 NigeyS is there any reason why gluster would stop ALL of its logging? seems the log files were created at 6:36am but they are all 0 bytes .. and they should have something in them as i've been running siege on the webservers this morning..
09:02 anti[Enrico] joined #gluster
09:03 Debloper joined #gluster
09:05 rjoseph joined #gluster
09:06 hagarth NigeyS: did a logrotate create those 0 byte files?
09:06 atalur joined #gluster
09:06 NigeyS it did yup, and after which gluster just stopped logging completely
09:08 hagarth NigeyS: maybe it did not find anything to log? :)
09:08 hagarth NigeyS: is your deployment functionally ok?
09:09 NigeyS i thought about that, but i ran siege on 2 of the webservers, i would have expected "something" especially as 1 of them hit a memory timeout
09:09 NigeyS it was fine until the mem timeout yup..
09:09 coredumb so continuing my git on glusterfs testing
09:09 coredumb pushing a full kernel tree to an empty repo
09:09 coredumb there's HUGE performance loss there :O
09:10 hagarth NigeyS: maybe you can do a strace -f -p <glusterfsd-pid> to see if its behavior is normal
09:10 hagarth coredumb: how bad is it? 10x, 20x or more? :)
09:11 NigeyS okies, will do.
09:11 coredumb hagarth: 12MB/s on FS
09:11 shubhendu joined #gluster
09:11 coredumb 400KB/s on gluster :O
09:11 deniszh joined #gluster
09:12 NigeyS i just got some crazy speed results too, serving a webpage was 8 - 18 seconds over the gluster fs
09:12 coredumb my test yesterday on small repositories weren't that bad
09:12 ndevos ~php | NigeyS
09:12 glusterbot NigeyS: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH
09:12 glusterbot --negative-timeout=HIGH --fopen-keep-cache
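As a sketch, option #2 translates into a fuse client invocation along these lines (server, volume, mount point and the actual timeout values are placeholders to tune for your workload):

    glusterfs --volfile-server=server1 --volfile-id=webvol \
      --attribute-timeout=600 --entry-timeout=600 --negative-timeout=600 --fopen-keep-cache \
      /var/www/shared

Higher timeouts mean the client caches metadata longer, so changes made on other clients take correspondingly longer to become visible.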
09:12 coredumb but i hadn't tested initial big commits
09:13 coredumb Writing objects: 100% (3455688/3455688), 616.40 MiB | 12.04 MiB/s, done.
09:13 NigeyS ndevos yup, going to be doing some php tweaking today, i ran siege before just to geta baseline to compare with.
09:13 coredumb on FS
09:13 coredumb Writing objects:   9% (328373/3455688), 128.24 MiB | 335 KiB/s
09:13 hagarth coredumb: can you try profiling git push with strace -Tc to see the calls that suffer more latency?
09:13 coredumb on gluster
09:14 ArminderS if we convert a 3-brick, replica-3 volume to a 4-brick, replica-2 one, will the performance get any better?
09:16 hagarth ArminderS: since replication is completely synchronous, write performance does get better with fewer replicas
09:17 coredumb hagarth: strace doesn't return anything
09:17 ArminderS this will mean 50% on each of 4 nodes, right
09:17 liquidat joined #gluster
09:19 hagarth coredumb: strace -Tcf git push ?
09:20 NigeyS hagarth get nothing in strace when siege is running other than get time of day messages..
09:20 coredumb hagarth: nope i attached to the process already running
09:20 coredumb takes 5 min just to count objects lol
09:21 hagarth coredumb: try keeping it for longer :)
09:22 hagarth NigeyS: that looks odd, does siege perform both reads and writes?
09:22 hagarth ArminderS: depends on files being updated
09:22 NigeyS just reads, its literally just simulating 15 users hitting index.php
09:23 hagarth NigeyS: wonder if reads are being served off a client side cache
09:23 NigeyS ill strace the fuse client on the webserver maybe ?
09:23 hagarth NigeyS: yes, that would give a better picture
09:24 bjornar joined #gluster
09:26 coredumb hagarth: now i don't think i'll often have a use case where i should push a kernel tree :D
09:26 coredumb and i could push it on disk then rsync to the gluster if needed
09:26 coredumb but well
09:27 coredumb i need this git master/master setup to be as low overhead as possible
09:27 NigeyS hagarth can i pipe strace to a file? its moving rather quick ..lol
09:27 hagarth NigeyS: use -o <output-file> with strace
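Putting hagarth's flags together, a capture might look like this (a sketch; remote and branch names are placeholders):

    # per-call times (-T), summary table (-c), follow children (-f), write to a file (-o)
    strace -f -T -c -o /tmp/git-push.strace git push origin master
    # or attach to an already running process instead:
    strace -f -T -c -o /tmp/trace.out -p <pid>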
09:28 hagarth coredumb: yes, improving git performance on gluster is on the top of my list for 3.7
09:28 NigeyS thanks
09:28 * hagarth needs to run now.. ttyl folks
09:29 NigeyS ttyl, and thanks :)
09:29 coredumb hagarth: oh nice to hear :)
09:30 dusmant joined #gluster
09:37 Fen2 joined #gluster
09:39 vimal joined #gluster
09:39 Fen2 Hi ! :) Can we install Red Hat Storage Console on CentOS 7 ?
09:41 coredumb well you should ask red hat maybe ?
09:42 NigeyS Fen2 is that the same as SSM ?
09:44 Fen2 SSM is not a GUI, no ?
09:45 NigeyS no, guess its not the same thing
09:45 Fen2 I would like to manage glusterFS without ovirt
09:45 Slashman joined #gluster
09:46 Fen2 and i found Red Hat Storage Console which is pretty nice :)
09:46 Fen2 but we are on CentOS 7 and not on Red Hat
09:57 corretico joined #gluster
10:03 ArminderS trying to reduce replica from 3 to 2, gave command -> gluster volume remove-brick myvol replica 2 gluster04:/export/sdb1/myvol start
10:03 ArminderS volume remove-brick start: failed: One or more nodes do not support the required op-version. Cluster op-version must atleast be 30600.
10:03 glusterbot ArminderS: set the desired op-version using ''gluster volume set all cluster.op-version $desired_op_version''.
10:04 ArminderS i'm running 3.6.1-1.el6.x86_64 on all
10:04 ArminderS what shall be the $desired_op_version in this case?
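The error above already gives the number: the cluster op-version must be at least 30600, which corresponds to the 3.6.x level. Following glusterbot's hint, the sequence would look roughly like this (volume and brick names are taken from the command above; whether start/status/commit or a force removal is the right finishing step depends on your layout):

    gluster volume set all cluster.op-version 30600
    gluster volume remove-brick myvol replica 2 gluster04:/export/sdb1/myvol start
    gluster volume remove-brick myvol replica 2 gluster04:/export/sdb1/myvol status   # wait for completion
    gluster volume remove-brick myvol replica 2 gluster04:/export/sdb1/myvol commit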
10:12 kdhananjay joined #gluster
10:12 ArminderS- joined #gluster
10:18 rjoseph joined #gluster
10:20 dusmant joined #gluster
10:23 glusterbot News from newglusterbugs: [Bug 1168167] Change licensing of disperse to dual LGPLv3/GPLv2 <https://bugzilla.redhat.com/show_bug.cgi?id=1168167>
10:26 ghenry joined #gluster
10:37 SOLDIERz joined #gluster
10:43 rjoseph joined #gluster
10:52 kdhananjay joined #gluster
11:01 warci joined #gluster
11:10 diegows joined #gluster
11:12 ppai joined #gluster
11:13 gothos left #gluster
11:16 rolfb joined #gluster
11:20 marcus joined #gluster
11:21 marcus hi all. i got a two node replicated system, but when one node boots up and the other node is unavailable, gluster does not even start on that single node. so is it required for both servers to be available during startup?
11:29 haomaiwa_ joined #gluster
11:47 diegows joined #gluster
11:48 DV joined #gluster
11:49 meghanam joined #gluster
11:49 meghanam_ joined #gluster
11:49 soumya_ joined #gluster
11:51 kovshenin joined #gluster
11:52 kanagaraj joined #gluster
12:00 SOLDIERz joined #gluster
12:04 rjoseph joined #gluster
12:12 DV joined #gluster
12:15 ArminderS joined #gluster
12:16 ppai joined #gluster
12:20 bala joined #gluster
12:24 SOLDIERz joined #gluster
12:27 itisravi marcus: This is glusterd's behaviour by design. See http://supercolony.gluster.org/pipermail/gluster-users/2014-November/019660.html
12:29 itisravi_ joined #gluster
12:29 atalur joined #gluster
12:29 RameshN joined #gluster
12:45 shubhendu joined #gluster
12:48 RameshN joined #gluster
12:52 atalur joined #gluster
12:59 rgustafs joined #gluster
13:00 marcus itisravi, thanks for the hint. could this be a topic for the documentation as well?
13:00 Fen1 joined #gluster
13:00 itisravi marcus: well, it could be documented as a known issue maybe.
13:02 marcus you mean e.g. here: http://www.gluster.org/documentation/howto/Basic_Gluster_Troubleshooting/ ?
13:02 marcus at least that's where i looked first ;)
13:02 itisravi marcus: That would be a good place :)
13:03 marcus am i able to add some notes there on my own?
13:03 itisravi marcus: Humble would be the best person to answer that.
13:04 itisravi Humble: can anyone edit http://www.gluster.org/documentation/howto/Basic_Gluster_Troubleshooting/ ?
13:04 Slashman joined #gluster
13:05 * itisravi is logging off.
13:06 Humble itisravi, if u have a login , u can
13:06 marcus hmm, can i sign up for a login as well?
13:07 Humble marcus, wait..
13:07 Humble if the documentation is part of media wiki you can login and edit
13:08 Humble but this url is part of website
13:10 Humble marcus, if you already have the note or if you can write about it, I can help you render it on the website..
13:10 vimal joined #gluster
13:11 SOLDIERz joined #gluster
13:16 topshare joined #gluster
13:21 edward1 joined #gluster
13:22 jaymeh joined #gluster
13:23 redback joined #gluster
13:28 marcus we could perhaps use atin's statement in a modified form: http://pastebin.com/EDBf3FMi
13:28 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
13:33 marcus Humble, and here is a version on http://fpaste.org/154272/880314/raw/
13:36 bala joined #gluster
13:39 nbalachandran joined #gluster
13:40 rgustafs joined #gluster
13:40 Philambdo joined #gluster
13:41 jaymeh I have been following the documentation in order to set up a share using gluster on Ubuntu 14.04 LTS but it seems that the share isn't working. The folders are appearing on all 3 servers and the volume is started, but when I add a file it isn't shared across the machines. I tried a manual mount of the folder but the machine I run it on just hangs
13:42 bala joined #gluster
13:45 virusuy joined #gluster
13:45 virusuy joined #gluster
13:45 hagarth joined #gluster
13:47 tdasilva joined #gluster
13:49 julim joined #gluster
13:58 SOLDIERz joined #gluster
13:59 Telsin joined #gluster
14:12 nbalachandran joined #gluster
14:12 dusmant joined #gluster
14:15 Telsin joined #gluster
14:16 B21956 joined #gluster
14:17 fyxim_ joined #gluster
14:18 nishanth joined #gluster
14:22 harish_ joined #gluster
14:22 sage__ joined #gluster
14:26 topshare joined #gluster
14:27 msmith_ joined #gluster
14:29 mator hey
14:30 mator how do I fix it manually:
14:30 mator [2014-11-26 18:25:35.717430] E [dht-common.c:659:dht_lookup_everywhere_done] 0-cdn-vol1-dht: path /lock/1 exists as a file on one subvolume and directory on another. Please fix it manually
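There is no single safe recipe for that message; the usual manual approach is to look at the entry on each brick backend, decide which copy (file or directory) is the one you want to keep, and remove the stale one together with its gfid link under the brick's .glusterfs directory, working on the brick itself rather than through the mount. A sketch only, with placeholder brick paths:

    # run on each brick server and compare the results
    ls -ld /path/to/brick/lock/1
    getfattr -d -m . -e hex /path/to/brick/lock/1    # trusted.gfid identifies the matching .glusterfs entry

Back up anything you are unsure about before deleting, and stat the path from a client afterwards so DHT can recreate a consistent entry.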
14:40 DV joined #gluster
14:42 SOLDIERz joined #gluster
14:43 msmith_ joined #gluster
14:44 ricky-ticky joined #gluster
14:44 coredump joined #gluster
14:49 nishanth joined #gluster
14:49 Telsin joined #gluster
14:52 failshell joined #gluster
14:55 Telsin joined #gluster
14:56 kanagaraj joined #gluster
15:00 atinmu joined #gluster
15:01 failshell joined #gluster
15:01 Telsin left #gluster
15:05 SOLDIERz joined #gluster
15:05 soumya_ joined #gluster
15:08 Slashman_ joined #gluster
15:14 SOLDIERz joined #gluster
15:15 _Bryan_ joined #gluster
15:15 theron joined #gluster
15:15 wushudoin joined #gluster
15:16 DV joined #gluster
15:16 n-st joined #gluster
15:18 meghanam_ joined #gluster
15:18 meghanam joined #gluster
15:26 smohan_ joined #gluster
15:29 nbalachandran joined #gluster
15:31 bennyturns joined #gluster
15:34 rafi1 joined #gluster
15:36 georgeh-LT2 joined #gluster
15:37 _dist joined #gluster
15:40 SOLDIERz joined #gluster
15:42 rafi1 joined #gluster
15:46 ricky-ticky joined #gluster
15:51 lmickh joined #gluster
15:53 rolfb joined #gluster
15:53 calisto joined #gluster
15:54 Telsin joined #gluster
15:56 anoopcs joined #gluster
15:56 SOLDIERz joined #gluster
15:57 ArminderS joined #gluster
15:57 TealS joined #gluster
15:58 rafi1 joined #gluster
15:59 RameshN joined #gluster
16:00 jmarley joined #gluster
16:07 bala joined #gluster
16:10 rafi1 joined #gluster
16:12 virusuy joined #gluster
16:13 coredump So guys
16:13 coredump I am getting this on my logs:  0-fuse: xlator does not implement release_cbk
16:14 coredump and at the same time I get a write error
16:14 coredump W [client-rpc-fops.c:2071:client3_3_create_cbk] 0-cinder-client-0: remote operation failed: Permission denied. Path: /.cinder-write-test-48209-61J4xM (00000000-0000-0000-0000-000000000000)
16:16 drankis joined #gluster
16:16 plarsen joined #gluster
16:16 jobewan joined #gluster
16:21 djones_ joined #gluster
16:32 mojibake joined #gluster
16:34 RameshN joined #gluster
16:45 djones_ left #gluster
16:45 meghanam_ joined #gluster
16:45 meghanam joined #gluster
16:46 ricky-ticky joined #gluster
16:49 hagarth joined #gluster
16:53 mojibake joined #gluster
16:53 ryao joined #gluster
16:59 ryao joined #gluster
16:59 _dist joined #gluster
17:03 vertex^ joined #gluster
17:06 deniszh1 joined #gluster
17:15 vertex^ Hey guys, I'm seeing stuff like this in my gluster mount's log: [2014-11-26 16:19:08.951683] I [dht-common.c:1822:dht_lookup_cbk] 0-rdo-dht: Entry /rdo/rodeo/setup/lib/python/pure/io.py missing on subvol rdo-replicate-2 <---- Does that mean someone asked if such file existed (and didn't) or does it mean that the file was expected to exist but didn't (possibly) due to missing content in a subvolume?
17:15 glusterbot vertex^: <--'s karma is now -3
17:15 vertex^ lol, does asking questions lower my karma? :p
17:16 vertex^ So like, should I do a gluster heal if I see that kind of log warning?
17:27 coredump So any idea if the "xlator does not implement release_cbk" are related to the permission denied errors I get?
17:32 drankis joined #gluster
17:33 elico joined #gluster
17:45 lalatenduM joined #gluster
18:01 ildefonso joined #gluster
18:13 cmtime joined #gluster
18:16 nshaikh joined #gluster
18:26 coredump joined #gluster
18:36 PeterA joined #gluster
18:41 TealS joined #gluster
18:56 SOLDIERz joined #gluster
19:03 Telsin joined #gluster
19:12 Telsin left #gluster
19:16 TealS joined #gluster
19:27 glusterbot News from resolvedglusterbugs: [Bug 895528] 3.4 Alpha Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=895528>
19:27 glusterbot News from resolvedglusterbugs: [Bug 884597] dht linkfile are created with different owner:group than that source(data) file in few cases <https://bugzilla.redhat.com/show_bug.cgi?id=884597>
19:35 msmith_ joined #gluster
19:35 msmith_ joined #gluster
19:36 msmith_ joined #gluster
19:38 rafi1 joined #gluster
19:42 unwastable joined #gluster
19:44 baoboa joined #gluster
19:44 unwastable i have old files that have dirty flags in AFR changelog attr, but no discrepancies found between the 1x2 replicas. Would it be safe to reset to 0x000... ? please help
19:45 plarsen joined #gluster
19:47 unwastable anyone?
19:54 sage_ joined #gluster
20:00 B21956 joined #gluster
20:02 xavih joined #gluster
20:04 ghenry_ joined #gluster
20:06 KjetilK joined #gluster
20:18 semiosis unwastable: what version of glusterfs?  see ,,(split-brain)
20:18 glusterbot unwastable: (#1) To heal split-brain, use splitmount. http://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/, or (#2) For additional information, see this older article http://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
20:29 plarsen joined #gluster
20:29 plarsen joined #gluster
20:42 ghenry_ joined #gluster
20:42 virusuy joined #gluster
20:47 unwastable this is not a splitbrain
20:48 unwastable glusterbot: both afr changelog have the same value even though they are not zero, just want to know if it is safe to reset them to all zero
20:50 firemanxbr joined #gluster
20:51 NigeyS hey semiosis :)
20:52 semiosis unwastable: if it's not splitbrain then when you stat the file through a client mount glusterfs will zero the afr attrs automatically
20:52 semiosis ...and thats how you should resolve it
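In other words, something as simple as this from a client mount should clear them (volume name and paths are placeholders):

    stat /mnt/myvol/path/to/file > /dev/null
    # or sweep the whole volume and let self-heal catch everything:
    find /mnt/myvol -noleaf -print0 | xargs -0 stat > /dev/null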
20:52 semiosis NigeyS: hey
20:52 NigeyS had some shocking results with gluster, might have to change our plans :(
20:55 semiosis care to share?
20:56 NigeyS it's pretty much what i told my colleagues from the get go, wordpress index.php load time over gluster, 6 - 18 seconds
20:57 NigeyS and they don't want to run varnish either, ever feel like you're talking to a brick wall? :/
20:58 semiosis i really can't recommend gitlab highly enough if you decide to build your own git based deployment system
20:59 NigeyS I passed on that info, they were looking into moving everything to a git based system.
20:59 semiosis we're not hosting wordpress, but rather custom apps written with ZF2.  we wrote about a dozen modules in house & use about that many from outside, all managed with composer/packagist/satis
21:00 semiosis ymmv though
21:01 semiosis ZF2 was designed to enable this kind of workflow
21:01 NigeyS ah i see, we have about 90 websites, 90% of which is our own php CMS, but the few wordpress sites we do have are pretty heavy on traffic.
21:06 Philambdo joined #gluster
21:09 uebera|| joined #gluster
21:09 uebera|| joined #gluster
21:12 rafi1 joined #gluster
21:17 jmarley joined #gluster
21:18 badone joined #gluster
21:19 TealS joined #gluster
21:19 TealS left #gluster
21:30 rotbeard joined #gluster
21:32 unwastable glusterbot: the client didn't zero the afr changelog; however, there was an issue in Samba that triggered a null setfattr and added the value in the changelog for metadata change. The issue in Samba has been fixed, so I would like to know if it is safe to reset the changelog to all zero by hand?
21:33 badone joined #gluster
21:41 tdasilva joined #gluster
21:44 semiosis unwastable: still wondering what version of glusterfs you're using.  in any case, it might be safe, but should be unnecessary if it's not split brain
21:44 semiosis you could test it yourself
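For completeness, inspecting and resetting the attrs by hand would look roughly like this on the brick backend (volume name, client indexes and paths are placeholders, and the stat-through-a-client route above is the safer one):

    # see the current pending counters on each brick
    getfattr -d -m trusted.afr -e hex /path/to/brick/some/file
    # zero them only if you are certain both copies really match
    setfattr -n trusted.afr.myvol-client-0 -v 0x000000000000000000000000 /path/to/brick/some/file
    setfattr -n trusted.afr.myvol-client-1 -v 0x000000000000000000000000 /path/to/brick/some/file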
22:27 m0ellemeister joined #gluster
22:31 KjetilK Gluster looks nice, but I'm not sure just how commodity "commodity" hardware can be :-) I just got an old Compaq BL10e blade server in the door yesterday. They have a PIII CPU with 1 GB RAM, I have 15 nodes with 40 GB disks, and 5 nodes that are diskless, but I don't need to use the diskless nodes
22:32 semiosis sounds like that will keep your house warm all winter
22:33 KjetilK taking a step back, what I would like to do is to install Debian Jessie and then be able to run all the nodes using the same OS, but it'd be great if they shared the filesystem, and that the filesystem was larger than that 40 GB. Finally, since it is 12 years old, the disks included, it must be expected that they die
22:33 KjetilK semiosis, oh, I can't keep it running, it would overheat my house :-)
22:33 semiosis haha
22:33 KjetilK (seriously, it is a passive house, it needs only half the heating that this box would provide :-) )
22:34 semiosis i understand
22:34 KjetilK so, yeah, I'm just going to run it when I need it for something
22:34 plarsen joined #gluster
22:34 semiosis gluster itself doesnt need much but you're probably not going to be too happy with the performance of those old disks & network cards
22:34 KjetilK so, the first question is if gluster would help me do this?
22:35 * KjetilK nods
22:35 semiosis if this is just a fun project for your education i'd say go for it.  it should work.  i doubt it would be useful for any real work though
22:35 semiosis just because of how old that hardware is
22:36 KjetilK my next use case is a small web spider, fetching a few thousand web sites, do some analysis on each, and spit out a few hundred RDF statements to a file
22:36 KjetilK so, the latency of the outside network is going to be significant anyway
22:37 semiosis lets do a little math... 12 years old, divided by 18 months = 8 moore's laws... so you could get a system today that's 1/256th the size and do the same amount of work... amiright?
22:38 semiosis so that's like... one raspberry pi ;)
22:38 KjetilK another thing I might do some day is to have my varnish cache send a wake-on-LAN ping to the box if it detects that my normal web server is getting more than usual traffic
22:38 KjetilK errrr
22:39 semiosis ok maybe not quite a raspi. but you get the idea
22:39 KjetilK yeah
22:40 KjetilK I think it is more of an issue for my time
23:40 KjetilK if it takes too much of my time to get it running, it is not worth it; if I can get it up by tomorrow night, have it for some years, and gain some insight into distributed file systems, then it'd be cool
22:41 semiosis imho a new laptop with ssd would be a better investment
22:41 KjetilK it is kinda neat too, this is one of the first blade servers that hit the market
22:41 KjetilK it is an historical gem :-)
22:41 semiosis now, if you needed to heat a larger house...
22:44 KjetilK I should of course get a Haswell E5-2600 v3 :-)
22:44 KjetilK 18 cores...
22:46 KjetilK anyway, is there any specific documentation about using Gluster as the root filesystem?
23:00 semiosis not that i can think of.  people usually mount glusterfs for application data.  perhaps you could build an image to boot on the blades that mounts a gluster volume.
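A sketch of what such a booted image would mount, e.g. via fstab (server and volume names are placeholders):

    # /etc/fstab on each blade
    server1:/blades-vol  /srv/shared  glusterfs  defaults,_netdev,backupvolfile-server=server2  0  0

The root filesystem itself would stay on the local 40 GB disk (or a small netboot image), with the shared data living on the gluster mount.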
23:14 * KjetilK nods
23:24 gildub joined #gluster
23:26 vertex^ Hey guys, I see a lot of "I [dht-common.c:1822:dht_lookup_cbk] 0-rdo-dht: Entry /somefile missing on subvol myvolume-replicate-0" in the mount logs, should I be worried?
23:26 vertex^ The file doesn't exist, but it logs it
23:26 vertex^ I don't get why
23:26 semiosis something is trying to open a nonexistent file
23:26 vertex^ hmmm
23:27 vertex^ it appears to be Python, judging from the filenames
23:28 vertex^ as if it's trying to import from everywhere
23:28 vertex^ weird.
23:28 vertex^ [2014-11-26 23:24:22.149844] I [dht-common.c:1822:dht_lookup_cbk] 0-rdo-dht: Entry /etc missing on subvol rdo-replicate-1
23:28 vertex^ that makes no sense
23:28 vertex^ why is it looking for /etc
23:28 vertex^ :/
23:31 vertex^ Is there any way I can ignore warnings about files not existing?
23:31 vertex^ if I do "ll /nonexistantfile" I don't think that should warrant a log entry
23:46 vertex^ some verbosity config or something?
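Those dht_lookup_cbk entries are logged at INFO level (the leading "I"), so one option is to raise the client log level so only warnings and errors are kept; the volume name below comes from the log lines above:

    gluster volume set rdo diagnostics.client-log-level WARNING
    # brick-side equivalent, if the brick logs are just as chatty:
    gluster volume set rdo diagnostics.brick-log-level WARNING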
