
IRC log for #gluster, 2015-04-22


All times shown according to UTC.

Time Nick Message
00:22 wdennis joined #gluster
00:33 roost joined #gluster
00:51 wdennis joined #gluster
00:52 wdennis hi all - going to use Gluster across two nodes (and possibly a third at some point) for an oVirt cluster's storage - if I utilize the native client, can the multiple nodes all share the same volumes?
00:53 wdennis (I think so, but just checking - the oVirt docs say to use NFS, but no need if I don't have to, since all the oVirt nodes will host bricks for the same Gluster volumes)
01:05 Jmainguy wdennis: native gluster client?
01:06 wdennis Jmainguy: yes
01:06 Jmainguy wdennis: yeah that is no issue, you can even use nfs from gluster if you wanted to
01:07 wdennis Jmainguy: just wanted to make sure, as the oVirt docs assume NFS mount
01:07 Jmainguy wdennis: so you build your volume across multiple nodes, and then your clients mount the volumes, and many clients can mount one volume, via either gluster or nfs
01:07 Jmainguy yeah just treat gluster like nfs for any nfs requirements
01:07 wdennis Jmainguy: cool, thx
01:07 Jmainguy you should be solid
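For reference, a native-client mount of a shared Gluster volume from any number of clients would look roughly like the sketch below; the hostname, volume name and mount point are placeholders, not taken from the conversation above.

    # native GlusterFS (FUSE) mount - the named host only serves the volume
    # layout; the client then talks to all bricks in the volume directly
    mount -t glusterfs node1:/datavol /mnt/datavol

    # the same volume over Gluster's built-in NFS server (NFSv3 only)
    mount -t nfs -o vers=3,mountproto=tcp node1:/datavol /mnt/datavol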
01:09 mikedep3- joined #gluster
01:11 roost joined #gluster
01:14 kdhananjay joined #gluster
01:18 soumya joined #gluster
01:46 julim joined #gluster
01:46 msmith joined #gluster
01:56 nangthang joined #gluster
02:02 Anjana joined #gluster
02:03 badone__ joined #gluster
02:03 smohan joined #gluster
02:06 roost joined #gluster
02:10 harish joined #gluster
02:36 smohan joined #gluster
02:38 wkf joined #gluster
02:42 shubhendu joined #gluster
02:51 smohan_ joined #gluster
02:55 misc JoeJulian: redirect what ?
03:01 poornimag joined #gluster
03:04 poornimag joined #gluster
03:17 Guest33730 joined #gluster
03:20 kumar joined #gluster
03:20 kanagaraj joined #gluster
03:37 jiku joined #gluster
03:43 itisravi joined #gluster
04:03 atinmu joined #gluster
04:22 smohan joined #gluster
04:23 pppp joined #gluster
04:27 sakshi joined #gluster
04:27 ppai joined #gluster
04:30 anoopcs joined #gluster
04:30 overclk joined #gluster
04:30 Triveni joined #gluster
04:36 vimal joined #gluster
04:38 ashiq joined #gluster
04:42 jiffin joined #gluster
04:44 Manikandan joined #gluster
04:48 iheartwhiskey joined #gluster
04:53 sripathi joined #gluster
04:54 ndarshan joined #gluster
04:54 ashiq joined #gluster
04:57 deepakcs joined #gluster
04:57 rafi joined #gluster
04:58 shubhendu joined #gluster
05:07 wkf joined #gluster
05:09 Bhaskarakiran joined #gluster
05:11 ashiq joined #gluster
05:14 schandra joined #gluster
05:18 iheartwhiskey left #gluster
05:18 iheartwhiskey joined #gluster
05:19 lalatenduM joined #gluster
05:19 raghu joined #gluster
05:20 foster joined #gluster
05:21 iheartwhiskey vhosts /Cloaks
05:21 iheartwhiskey oops
05:23 anil joined #gluster
05:23 hagarth joined #gluster
05:28 gem joined #gluster
05:29 iheartwhiskey /msg NickServ SETPASS iheartwhiskey xjnkouoeecha re8THaqu
05:29 iheartwhiskey /msg NickServ
05:30 iheartwhiskey left #gluster
05:30 SOLDIERz joined #gluster
05:31 kshlm joined #gluster
05:33 schandra joined #gluster
05:33 maveric_amitc_ joined #gluster
05:34 Philambdo joined #gluster
05:37 plarsen joined #gluster
05:43 rjoseph joined #gluster
05:43 nbalacha joined #gluster
05:46 aravindavk joined #gluster
05:50 jcastill1 joined #gluster
05:54 Guest33730 joined #gluster
05:55 jcastillo joined #gluster
05:57 foster joined #gluster
05:59 DV joined #gluster
05:59 atalur joined #gluster
06:01 lalatenduM joined #gluster
06:03 soumya joined #gluster
06:05 DV joined #gluster
06:07 gem joined #gluster
06:13 liquidat joined #gluster
06:19 jtux joined #gluster
06:19 Pupeno joined #gluster
06:19 Pupeno joined #gluster
06:22 kotreshhr joined #gluster
06:29 Anjana joined #gluster
06:33 Guest33730 joined #gluster
06:33 ghenry joined #gluster
06:33 ghenry joined #gluster
06:37 schandra joined #gluster
06:42 aravindavk joined #gluster
06:45 smohan_ joined #gluster
06:47 hgowtham joined #gluster
06:55 SOLDIERz joined #gluster
07:07 hgowtham joined #gluster
07:10 mbukatov joined #gluster
07:11 Guest33730 joined #gluster
07:15 lifeofguenter joined #gluster
07:16 haomaiwa_ joined #gluster
07:19 [Enrico] joined #gluster
07:21 SOLDIERz joined #gluster
07:22 glusterbot News from newglusterbugs: [Bug 1214169] glusterfsd crashed while rebalance and self-heal were in progress <https://bugzilla.redhat.com/show_bug.cgi?id=1214169>
07:29 horacer joined #gluster
07:32 DV joined #gluster
07:42 smohan joined #gluster
07:42 T0aD joined #gluster
07:43 fsimonce joined #gluster
07:46 gem joined #gluster
07:46 deniszh joined #gluster
07:47 deniszh left #gluster
07:48 deniszh joined #gluster
07:51 meghanam joined #gluster
07:54 lifeofgu_ joined #gluster
07:57 Slashman joined #gluster
08:06 ctria joined #gluster
08:20 ktosiek joined #gluster
08:23 gem_ joined #gluster
08:27 ghenry joined #gluster
08:28 SOLDIERz joined #gluster
08:30 Bhaskarakiran joined #gluster
08:31 ndk joined #gluster
08:32 Norky joined #gluster
08:35 ppai joined #gluster
08:35 sage_ joined #gluster
08:36 karnan joined #gluster
08:38 gem_ joined #gluster
08:43 atalur joined #gluster
08:44 mbukatov joined #gluster
08:46 mbukatov joined #gluster
08:47 al joined #gluster
08:50 [Enrico] joined #gluster
08:53 gem__ joined #gluster
08:54 mbukatov joined #gluster
08:56 mbukatov joined #gluster
09:00 lifeofguenter joined #gluster
09:16 kotreshhr1 joined #gluster
09:16 anrao joined #gluster
09:17 mbukatov joined #gluster
09:17 Anjana joined #gluster
09:17 mbukatov joined #gluster
09:21 mbukatov joined #gluster
09:22 Bardack joined #gluster
09:25 ashiq joined #gluster
09:26 poornimag joined #gluster
09:27 hgowtham joined #gluster
09:30 anrao joined #gluster
09:33 ppai joined #gluster
09:38 ira joined #gluster
09:42 kdhananjay joined #gluster
09:46 jcastill1 joined #gluster
09:48 lifeofgu_ joined #gluster
09:51 Manikandan joined #gluster
09:51 Manikandan_ joined #gluster
09:51 jcastillo joined #gluster
09:52 glusterbot News from newglusterbugs: [Bug 1214220] Crashes in logging code <https://bugzilla.redhat.com/show_bug.cgi?id=1214220>
09:52 glusterbot News from newglusterbugs: [Bug 1214222] Directories are missing on the mount point after attaching tier to distribute replicate volume. <https://bugzilla.redhat.com/show_bug.cgi?id=1214222>
09:52 glusterbot News from newglusterbugs: [Bug 1214219] Data Tiering:Enabling quota command fails with "quota command failed : Commit failed on localhost" <https://bugzilla.redhat.com/show_bug.cgi?id=1214219>
10:00 ira joined #gluster
10:01 Pupeno joined #gluster
10:11 Anjana joined #gluster
10:22 spandit joined #gluster
10:22 glusterbot News from newglusterbugs: [Bug 1214247] sharding - Implement remaining fops <https://bugzilla.redhat.com/show_bug.cgi?id=1214247>
10:23 glusterbot News from newglusterbugs: [Bug 1214248] Persist file size and block count of sharded files in the form of xattrs <https://bugzilla.redhat.com/show_bug.cgi?id=1214248>
10:23 glusterbot News from newglusterbugs: [Bug 1214249] I/O (fresh writes) failure of failure of hot tier <https://bugzilla.redhat.com/show_bug.cgi?id=1214249>
10:23 gem__ joined #gluster
10:34 raghug joined #gluster
10:36 lifeofguenter joined #gluster
10:36 harish_ joined #gluster
10:38 anrao joined #gluster
10:38 gem__ joined #gluster
10:40 DV joined #gluster
10:42 soumya_ joined #gluster
10:53 glusterbot News from newglusterbugs: [Bug 1214273] detach-tier o/p needs refinement <https://bugzilla.redhat.com/show_bug.cgi?id=1214273>
10:56 meghanam joined #gluster
10:57 lifeofgu_ joined #gluster
11:01 deepakcs joined #gluster
11:08 gem__ joined #gluster
11:09 meghanam_ joined #gluster
11:11 firemanxbr joined #gluster
11:11 SOLDIERz joined #gluster
11:17 atalur joined #gluster
11:20 ulimit joined #gluster
11:23 glusterbot News from newglusterbugs: [Bug 1214289] I/O failure on attaching tier <https://bugzilla.redhat.com/show_bug.cgi?id=1214289>
11:24 gem__ joined #gluster
11:27 lyang0 joined #gluster
11:42 DV__ joined #gluster
11:45 jcastill1 joined #gluster
11:47 LebedevRI joined #gluster
11:48 jdarcy joined #gluster
11:49 DV__ joined #gluster
11:50 hgowtham joined #gluster
11:50 jcastillo joined #gluster
11:54 gem__ joined #gluster
11:58 anoopcs joined #gluster
11:59 anrao joined #gluster
12:00 JustinClift *** Weekly Gluster Community Meeting is starting NOW on FreeNODE IRC #gluster-meeting ***
12:01 hagarth1 joined #gluster
12:04 side_control joined #gluster
12:09 kotreshhr joined #gluster
12:10 T0aD joined #gluster
12:12 sahina joined #gluster
12:13 soumya joined #gluster
12:17 Gill_ joined #gluster
12:18 rjoseph joined #gluster
12:18 ppai joined #gluster
12:23 kanagaraj joined #gluster
12:28 rafi1 joined #gluster
12:29 SOLDIERz joined #gluster
12:29 chirino joined #gluster
12:31 rafi joined #gluster
12:31 B21956 joined #gluster
12:39 gem__ joined #gluster
12:41 shubhendu joined #gluster
12:44 anil joined #gluster
12:48 Anjana joined #gluster
12:48 Philambdo joined #gluster
12:51 kotreshhr left #gluster
12:53 DV__ joined #gluster
12:57 rjoseph joined #gluster
12:58 Gill_ joined #gluster
13:01 atalur joined #gluster
13:05 rafi1 joined #gluster
13:08 DV__ joined #gluster
13:10 stickyboy joined #gluster
13:12 plarsen joined #gluster
13:18 dgandhi joined #gluster
13:19 dgandhi joined #gluster
13:21 dgandhi joined #gluster
13:22 dgandhi joined #gluster
13:23 dgandhi joined #gluster
13:24 dgandhi joined #gluster
13:25 dgandhi joined #gluster
13:25 glusterbot News from resolvedglusterbugs: [Bug 1199303] GlusterFS 3.4.8 Tracker <https://bugzilla.redhat.com/show_bug.cgi?id=1199303>
13:25 dgandhi joined #gluster
13:26 dgandhi joined #gluster
13:27 dgandhi joined #gluster
13:27 bennyturns joined #gluster
13:28 dgandhi joined #gluster
13:29 dgandhi joined #gluster
13:30 dgandhi joined #gluster
13:30 georgeh-LT2 joined #gluster
13:31 dgandhi joined #gluster
13:31 rafi joined #gluster
13:31 bene2 joined #gluster
13:33 dgandhi joined #gluster
13:34 dgandhi joined #gluster
13:35 dgandhi joined #gluster
13:36 DV__ joined #gluster
13:37 dgandhi joined #gluster
13:38 dgandhi joined #gluster
13:40 dgandhi joined #gluster
13:40 coredump joined #gluster
13:41 dgandhi joined #gluster
13:43 dgandhi joined #gluster
13:44 dgandhi joined #gluster
13:45 dgandhi joined #gluster
13:45 jmarley joined #gluster
13:46 dgandhi joined #gluster
13:46 harish_ joined #gluster
13:47 dgandhi joined #gluster
13:47 msmith_ joined #gluster
13:48 kanagaraj joined #gluster
13:48 msmith_ joined #gluster
13:48 dgandhi joined #gluster
13:49 dgandhi joined #gluster
13:50 dgandhi joined #gluster
13:52 dgandhi joined #gluster
13:53 glusterbot News from newglusterbugs: [Bug 1214339] Incorrect and unclear "vol info" o/p for tiered volume <https://bugzilla.redhat.com/show_bug.cgi?id=1214339>
13:54 dgandhi joined #gluster
13:55 dgandhi joined #gluster
13:57 dgandhi joined #gluster
13:58 dgandhi joined #gluster
13:58 dgandhi joined #gluster
13:59 dgandhi joined #gluster
14:02 nangthang joined #gluster
14:02 dgandhi joined #gluster
14:02 bene3 joined #gluster
14:03 neofob joined #gluster
14:04 dgandhi joined #gluster
14:06 dgandhi joined #gluster
14:08 dgandhi joined #gluster
14:09 dgandhi joined #gluster
14:10 dgandhi joined #gluster
14:12 dgandhi joined #gluster
14:13 dgandhi joined #gluster
14:13 wkf joined #gluster
14:14 haomaiwa_ joined #gluster
14:15 dgandhi joined #gluster
14:17 dgandhi joined #gluster
14:18 dgandhi joined #gluster
14:18 dgandhi joined #gluster
14:20 dusmant joined #gluster
14:20 dgandhi joined #gluster
14:21 SOLDIERz joined #gluster
14:22 dgandhi joined #gluster
14:22 wushudoin joined #gluster
14:23 dgandhi joined #gluster
14:25 dgandhi joined #gluster
14:26 dgandhi joined #gluster
14:27 SOLDIERz joined #gluster
14:27 plarsen joined #gluster
14:27 plarsen joined #gluster
14:28 dgandhi joined #gluster
14:29 dgandhi joined #gluster
14:30 squizzi joined #gluster
14:30 dgandhi joined #gluster
14:32 dgandhi joined #gluster
14:32 anrao joined #gluster
14:34 dgandhi joined #gluster
14:35 neofob joined #gluster
14:35 dgandhi joined #gluster
14:36 dgandhi joined #gluster
14:36 jobewan joined #gluster
14:38 dgandhi joined #gluster
14:39 dgandhi joined #gluster
14:40 dgandhi joined #gluster
14:41 neofob joined #gluster
14:42 dberry joined #gluster
14:42 dberry joined #gluster
14:42 dgandhi joined #gluster
14:43 dgandhi joined #gluster
14:44 dgandhi joined #gluster
14:46 dgandhi joined #gluster
14:47 coredump joined #gluster
14:47 dgandhi joined #gluster
14:48 dgandhi joined #gluster
14:49 dgandhi joined #gluster
14:51 partner umm it seems self-heal takes so much cpu it breaks gluster and disconnects clients when bringing back the fixed hardware with its bricks, replica 2, gluster 3.4.5 on debian
14:51 dgandhi joined #gluster
14:52 partner any hints on how to tackle the issue? at the moment the self-heal daemon is disabled in order to resurrect the production
14:52 dgandhi joined #gluster
14:54 dgandhi joined #gluster
14:54 Pupeno joined #gluster
14:54 dgandhi joined #gluster
14:55 partner obviously cannot manually self-heal in a controlled manner as the volume settings are still in effect
14:55 dgandhi joined #gluster
14:56 dgandhi joined #gluster
14:57 partner would/could it help if those different parts of heal were enabled one at a time? metadata, data, entry,..?
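The per-type heal toggles partner is asking about can be flipped individually with gluster volume set; a rough sketch, with a placeholder volume name, and option availability depends on the release:

    # background healing done by the self-heal daemon
    gluster volume set myvol cluster.self-heal-daemon off
    # heals triggered from the client side on access, per type
    gluster volume set myvol cluster.data-self-heal off
    gluster volume set myvol cluster.metadata-self-heal off
    gluster volume set myvol cluster.entry-self-heal off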
15:03 coredump joined #gluster
15:03 roost joined #gluster
15:06 virusuy_work joined #gluster
15:07 Arminder- joined #gluster
15:08 klaxa|work joined #gluster
15:08 klaxa|work i think i asked this before, but we want to live-migrate with gfapi, so far unsuccessfully
15:09 cholcombe joined #gluster
15:09 klaxa|work oh wait
15:09 klaxa|work no actually, this is the right channel, i think
15:10 klaxa|work well the problem is that when we live-migrate a vm with libvirt from machine A to machine B, the vm keeps the gfapi connection to machine A
15:11 klaxa|work we want the vm to connect to the gfapi on machine B though so we can shut down machine A
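For context, a gfapi-backed disk in a libvirt domain definition names a single Gluster host as its volfile server, roughly as in the hypothetical snippet below; since that host is baked into the guest definition, this is presumably why the migrated VM stays attached to machine A unless the definition used on machine B (or a shared hostname/VIP) points elsewhere.

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='gluster' name='vmvol/guest1.img'>
        <host name='machine-a.example.com' port='24007'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>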
15:11 wkf joined #gluster
15:12 hagarth joined #gluster
15:22 kshlm joined #gluster
15:23 kdhananjay joined #gluster
15:26 bene2 joined #gluster
15:26 Prilly joined #gluster
15:27 partner load is up to 30
15:29 coredump joined #gluster
15:30 partner hmm maybe dropping the background-self-heal-count from default of 16..
15:39 bennyturns partner, what is going on?  load is creeping up during a self heal?
15:40 partner yeah, replica counterpart was offline and it seems when the missing bricks are brought online (with self-heal daemon enabled) the cpu goes sky-high and eventually things start to break
15:41 T0aD joined #gluster
15:41 partner just wondering the options here to decrease the load
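One knob grounded in the discussion above is the number of heals a client will run concurrently in the background; lowering it from the default partner mentions would look something like this (volume name is a placeholder):

    gluster volume set myvol cluster.background-self-heal-count 4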
15:45 wdennis joined #gluster
15:48 soumya joined #gluster
15:54 glusterbot News from newglusterbugs: [Bug 1214387] Incorrect vol info post detach on disperse volume <https://bugzilla.redhat.com/show_bug.cgi?id=1214387>
15:54 glusterbot News from newglusterbugs: [Bug 1214393] Cannot access volume (from client) post detach <https://bugzilla.redhat.com/show_bug.cgi?id=1214393>
15:55 wdennis hi all -- added the requisite mount lines in /etc/fstab for my Gluster volumes (as per admin-guide > admin_settingup_clients.md) and when I boot the servers, the filesystems do not get mounted...
15:56 wdennis If I log in and exec a 'mount -a' then they do successfully get mounted
15:56 glusterbot News from resolvedglusterbugs: [Bug 1212822] Data Tiering: link files getting created on hot tier for all the files in cold tier <https://bugzilla.redhat.com/show_bug.cgi?id=1212822>
15:57 deepakcs joined #gluster
15:58 wdennis I assume the GlusterFS services are not running at mount time? What would I need to do to have the volumes mount successfully at boot time?
16:01 wdennis I'm running CentOS 7.1 with GlusterFS 3.6.2
16:01 partner you're correct, it's not exactly a new issue either, not sure what the current kosher "fix" for that is, some run mount -a from rc.local, some might tune something else..
16:04 wdennis partner: o.O  Really?  What to "tune"?  (the 'run mount-a from rc.local' did occur to me, but HACK HACK HACK...)
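For what it's worth, the mitigations usually suggested on systemd-based distros are to mark the mount as network-dependent and/or let systemd mount it lazily on first access instead of at boot; a sketch of such an fstab entry, with placeholder names, and not guaranteed to cover every ordering race:

    # /etc/fstab
    node1:/datavol  /mnt/datavol  glusterfs  defaults,_netdev,x-systemd.automount  0 0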
16:07 lifeofguenter joined #gluster
16:07 partner dunno what is the proper fix for centos. the issue was described here when i bumped into it: https://joejulian.name/blog/glusterfs-volumes-not-mounting-in-debian-squeeze-at-boot-time/
16:08 wdennis Hopefully JoeJulian will see this and give me the official word...
16:10 ira There's at least one bug in this area that I know of :/
16:12 lifeofgu_ joined #gluster
16:12 wdennis partner: wow, from 2 yrs ago... Would have hoped that RHAT would have fixed it on at least their distro by now...
16:26 glusterbot News from resolvedglusterbugs: [Bug 1214393] Cannot access volume (from client) post detach <https://bugzilla.redhat.com/show_bug.cgi?id=1214393>
16:26 wdennis ira: Seeing nothing in RHAT Bugzilla with search term 'GlusterFS mount boot'... Do you have a Bug #?
16:29 haomaiw__ joined #gluster
16:29 ira wdennis: I forget who has it... it deals with mount timing...
16:30 ira Mainly that mount.glusterfs returns before the volume has fully mounted.
16:30 JoeJulian I haven't seen that problem with Fedora.
16:32 JoeJulian wdennis: check the audit log. Maybe it's an selinux bug.
16:33 wdennis JoeJulian: selinux has been disabled on these hosts... But I will check the audit log nonetheless...
16:34 nangthang joined #gluster
16:39 wdennis joined #gluster
16:39 wdennis JoeJulian: not sure exactly what I'm looking for in the audit.log, but I do see this seemingly abnormal message:
16:39 wdennis type=ANOM_ABEND msg=audit(1429716384.944:60): auid=4294967295 uid=0 gid=0 ses=4294967295 pid=5106 comm="glusterfs" reason="memory violation" sig=11
16:44 maveric_amitc_ joined #gluster
16:44 hagarth joined #gluster
16:44 wdennis I also see this line in journalctl:
16:44 wdennis Apr 22 11:26:21 ovirt-node-01 mount[1328]: /sbin/mount.glusterfs: line 13: /dev/stderr: No such device or address
16:45 wdennis and immediately thereafter...
16:45 wdennis Apr 22 11:26:21 ovirt-node-01 systemd[1]: glusterfs-data.mount mount process exited, code=exited status=1
16:45 JoeJulian interesting
16:45 wdennis Apr 22 11:26:21 ovirt-node-01 systemd[1]: Unit glusterfs-data.mount entered failed state.
16:48 JoeJulian wdennis: can you fpaste the client log please?
16:49 wdennis JoeJulian: sorry, 'fpaste'?
16:49 JoeJulian @paste
16:49 glusterbot JoeJulian: For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
16:49 JoeJulian Or you can just open fpaste.org and copy/paste.
16:50 JoeJulian I'm just lazy.
16:50 meghanam joined #gluster
16:52 wdennis JoeJulian: cool, learned something new today :)
16:53 JoeJulian :)
16:54 wdennis JoeJulian: http://ur1.ca/k85wc
16:56 glusterbot News from resolvedglusterbugs: [Bug 1203776] Fix tiering bugs in DHT code. <https://bugzilla.redhat.com/show_bug.cgi?id=1203776>
16:56 glusterbot News from resolvedglusterbugs: [Bug 1212400] Attach tier failing and messing up vol info <https://bugzilla.redhat.com/show_bug.cgi?id=1212400>
16:58 jobewan joined #gluster
16:58 JoeJulian wdennis: which file is that? That looks like glustershd.log.
16:59 JoeJulian ... but I could be wrong. I haven't actually looked at a tracelog in a while.
16:59 JoeJulian Oh, wait... that's the cli log.
16:59 JoeJulian cli_cmd_submit
17:02 wdennis JoeJulian: yes, it was the cli.log
17:04 wdennis JoeJulian: was that not the correct one? (nothing else with 'cli' in /var/log/glusterfs/ )
17:05 JoeJulian /var/log/glusterfs/$($mountpoint | tr '/' '-').log
17:05 JoeJulian ... more or less.
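In other words, the FUSE client logs to a file named after the mount point, with slashes turned into dashes; for a hypothetical mount at /mnt/data the mapping is roughly:

    # mount point /mnt/data  ->  /var/log/glusterfs/mnt-data.log
    echo "/var/log/glusterfs/$(echo /mnt/data | sed 's|^/||; s|/|-|g').log"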
17:09 wdennis JoeJulian: OK, try this one then... http://ur1.ca/k85z2   (/var/log/glusterfs/glusterfs-engine.log)
17:14 Rapture joined #gluster
17:22 JoeJulian wdennis: Oh, sure. That makes sense. You have enabled quorum and at the point you're trying to mount, it's not yet established.
17:22 JoeJulian hmm
17:25 hagarth joined #gluster
17:26 glusterbot News from resolvedglusterbugs: [Bug 1202893] resource agent script for ganesha HA uses `service` to start/stop ganesha.nfsd <https://bugzilla.redhat.com/show_bug.cgi?id=1202893>
17:26 glusterbot News from resolvedglusterbugs: [Bug 1206592] Data Tiering:Allow adding brick to hot tier too(or let user choose to add bricks to any tier of their wish) <https://bugzilla.redhat.com/show_bug.cgi?id=1206592>
17:27 partner so umm, any thoughts on decreasing the load on self-heal situation? i would think the cpu is a bit too little for the box, at least haven't experienced similar stuff with the other hardware earlier
17:27 partner ref "above couple of hours back"
17:28 wdennis JoeJulian: situation was, both nodes were booting at the same time...
17:29 wdennis If I just booted one or the other, the fstab mount would work?
17:31 partner load is 20 now when only daemon is healing and client-side is disabled, everything barely stays up and alive now
17:35 partner based on xattrop this should take 35 days during which time the files would be inaccessible..
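The backlog partner is estimating from can be eyeballed per brick; a hedged sketch, with a placeholder brick path (the index directory also holds a base entry, so the count is approximate):

    # pending-heal index entries on one brick
    ls /bricks/brick1/.glusterfs/indices/xattrop | wc -l
    # or from the management CLI
    gluster volume heal myvol info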
17:36 wdennis OK, so if I boot any one system while the other is up, the Gluster filesystems do mount
17:38 partner wdennis: replica volume? yes
17:39 partner or doesn't even have to be that as long as any of the servers is reachable for the volume info
17:42 wdennis partner: yes, repl=2
17:43 wdennis hold on - I did bounce the other one, and it *did not* mount the Gluster filesystems..
17:44 nangthang joined #gluster
17:45 dusmant joined #gluster
17:46 partner sounds like you are talking about the serverside, not the client side, nevermind me then
17:53 wdennis wdennis: bounced same node again, and then it *did* mount correctly that time... seems a bit fragile...
17:53 jackdpeterson joined #gluster
17:58 wdennis ah well, #yolo... on with the show...
17:59 shaunm joined #gluster
18:03 klaxa joined #gluster
18:09 _Bryan_ joined #gluster
18:19 shaunm_ joined #gluster
18:25 partner i'm still open for suggestions to tackle my issue (repeating myself as people come and go)
18:27 JoeJulian wdennis: Sorry, I was dealing with a manufacturer's tech support. Another option for you would be to add a 3rd peer to allow your system to maintain server quorum.
18:28 JoeJulian partner: cgroups?
18:28 wdennis JoeJulian: yup, that's the plan, just haven't got there yet...
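A rough sketch of that plan, assuming the quorum in question is glusterd's server-side quorum; peer and volume names are placeholders:

    # add a third peer from an existing node
    gluster peer probe node3
    # bricks are stopped when fewer than server-quorum-ratio percent
    # of the trusted pool is reachable
    gluster volume set myvol cluster.server-quorum-type server
    gluster volume set all cluster.server-quorum-ratio 51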
18:41 partner JoeJulian: i have no idea how the gluster would react, possibly it would stay alive better or then not.. but thanks for the tip, having a look at that
18:42 partner but need to figure out something, 250000 of the new files from the past 2-3 days are giving permission denied due to hw issues which led to gluster issues
18:43 lifeofguenter joined #gluster
18:43 partner just feels a bit counterintuitive to limit when we are to my understanding way under needed capacity
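JoeJulian's cgroups idea would amount to capping the CPU the self-heal daemon can consume rather than turning it off; a minimal sketch with the cgroups v1 cpu controller, where the paths and quota are illustrative and behaviour under this load is untested:

    # allow the group roughly half of one core
    mkdir -p /sys/fs/cgroup/cpu/glustershd
    echo 100000 > /sys/fs/cgroup/cpu/glustershd/cpu.cfs_period_us
    echo  50000 > /sys/fs/cgroup/cpu/glustershd/cpu.cfs_quota_us
    # move the self-heal daemon process(es) into it
    pgrep -f glustershd | while read pid; do
        echo "$pid" > /sys/fs/cgroup/cpu/glustershd/tasks
    done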
18:52 Arminder joined #gluster
18:53 Arminder joined #gluster
18:56 Arminder joined #gluster
18:56 ricky-ticky joined #gluster
18:56 Arminder joined #gluster
19:00 Arminder joined #gluster
19:02 theron joined #gluster
19:02 marbu joined #gluster
19:04 partner does the self-heal daemon prioritize any heals or does it go through its xattrop list in order? we've disabled client-side triggered healing, yet files returning empty content or permission denied become functional shortly after being accessed
19:05 partner by prioritize i mean for example "hey that file just got accessed, i'll see into that soon"
19:05 marbu joined #gluster
19:10 mbukatov joined #gluster
19:11 jobewan joined #gluster
19:11 lkoranda joined #gluster
19:15 neofob joined #gluster
19:18 jcastill1 joined #gluster
19:23 jcastillo joined #gluster
19:25 Arminder joined #gluster
19:25 Arminder joined #gluster
19:37 rshade98 joined #gluster
19:50 theron_ joined #gluster
20:05 msmith joined #gluster
20:31 lexi2 joined #gluster
20:46 wkf joined #gluster
20:48 theron joined #gluster
20:53 ZakWolfinger joined #gluster
20:56 sage__ joined #gluster
20:57 ZakWolfinger Gluster 3.5.3 servers and client, volume is a replica 2.  When tar'ing a directory tree from the client I'm getting random "File removed before we read it" and if I try to copy the tree I get random "No such file or directory".  Looking on the servers, the files exist.  Volume heal gv0 info shows nothing needing healed.  Any thoughts?
21:00 sage_ joined #gluster
21:09 msmith joined #gluster
21:22 badone_ joined #gluster
21:41 theron joined #gluster
21:46 neofob joined #gluster
21:53 partner i guess it will run over night now, we will attempt Something(tm) tomorrow, loads of files not served still
21:53 partner daemon takes care of roughly 300 files per hour :(
21:55 partner and despite the "client won't trigger any heal" an access to a file will somehow get it shortly fixed, not on the fly but next attempt will succeed, i recall seeing a bug related
21:56 partner its ok, its not aggressive so its not bringing down the storage now, load seems to be steadily at 25 but clients get served
21:57 JoeJulian partner: You should write something up about this. I suspect there's a lot of people that would like to do what you're doing.
21:58 partner i'm just guessing :)
22:01 partner i would love to share everything but it just seems i never make it to the blog post level, "this" is my blog where i share the stuff
22:06 partner JoeJulian: but i do appreciate your persistence kicking me forward, unfortunately i seem to always disappoint you and the community on the topic :/
22:08 partner and as said earlier, i'm no longer maintaining this infra, just lending a helping hand to my former team and colleagues, this is just all new issue to me but perhaps the more the merrier :o
22:14 partner we have half a dozen ideas on how to proceed but even if we can manage to fix 1 file per second it will still take some 70 hours to get done.. best would be if we could allow clients to trigger the heal but as for now it's impossible due to load issues
22:15 partner we just might try adding physical cpu (one socket free, donors available), perhaps try cutting down clients while still keeping production running, playing with various options available,...
22:16 partner any and every attempt will anyway take an hour or so to level out and realistically show the effect for better or worse
22:39 PaulCuzner joined #gluster
22:53 dgandhi joined #gluster
23:01 Rapture joined #gluster
23:03 deniszh joined #gluster
23:12 jvandewege joined #gluster
23:15 edwardm61 joined #gluster
23:27 msmith joined #gluster
23:46 gildub joined #gluster
