
IRC log for #gluster, 2015-05-01


All times are shown in UTC.

Time Nick Message
00:06 nsoffer joined #gluster
00:06 eightyeight joined #gluster
00:07 eightyeight seems i have a FATAL running "glusterd --xlator-option '*.upgrade=on' -N"
00:07 eightyeight https://paste.debian.net/170342/
00:07 eightyeight JoeJulian, ^
00:08 JoeJulian Are you using rdma?
00:08 eightyeight i set up the volumes with "rdma,tcp"
00:08 eightyeight currently though, no. i'm not
00:08 eightyeight (2-node replication)
00:09 JoeJulian that shouldn't be fatal then...
00:09 eightyeight i'd be content just removing RDMA support, tbh
00:09 eightyeight if doable
00:09 eightyeight er, RDMA replication that is
00:12 JoeJulian under /var/lib/glusterd/vols/$vol I did "grep -r rdma ." and in "./bricks/$( echo $brick_path | tr '/' '-')" it found one line that reads rdma.listen-port=0
00:12 JoeJulian I suspect if you do the same and remove any rdma lines other than that one, it might solve that failure.
00:13 JoeJulian save a copy of /var/lib/glusterd, of course.
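
A minimal sketch of the check described above, assuming the volume's state lives under /var/lib/glusterd/vols/ and using a placeholder volume name:

    # back up glusterd's state before touching anything, as suggested above
    cp -a /var/lib/glusterd /var/lib/glusterd.bak

    # look for rdma references in the stored volume configuration
    # ("myvol" is a placeholder volume name)
    cd /var/lib/glusterd/vols/myvol
    grep -r rdma .

    # per the advice above, the "rdma.listen-port=0" line in the brick files
    # can stay; any other rdma hits are the ones worth removing
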
00:14 * eightyeight looks
00:15 eightyeight so, are you saying to remove the "rdma.listen-port=0" line?
00:15 JoeJulian No, I was saying that's the only line I would leave in.
00:15 JoeJulian If there are any other rdma hits.
00:16 eightyeight EG: https://paste.debian.net/170345/
00:17 eightyeight one of the hits: https://paste.debian.net/170346/
00:17 ghenry joined #gluster
00:17 ghenry joined #gluster
00:19 eightyeight seems there are actually port inconsistencies between the two nodes: https://paste.debian.net/170349/
00:19 eightyeight looks like just the android vol
00:24 JoeJulian You *should* actually be able to delete (or rename) all the .vol files, and that upgrade command should recreate them.
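
A hedged sketch of the regeneration step suggested here (volume name is a placeholder; keep the earlier backup of /var/lib/glusterd):

    # move the generated volfiles aside rather than deleting them outright
    cd /var/lib/glusterd/vols/myvol
    mkdir volfile-backup && mv *.vol volfile-backup/

    # re-run the upgrade pass, which should regenerate the .vol files from the
    # stored volume info (the log below shows it did not actually do so here
    # until glusterd was started again)
    glusterd --xlator-option '*.upgrade=on' -N
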
00:24 eightyeight i saved a backup, so i'm good to give it a try
00:25 eightyeight same error
00:25 eightyeight didn't recreate them
00:26 eightyeight is it safe to remove the "rdma.listen-port=0" lines?
00:26 JoeJulian I have a hack... one sec...
00:26 eightyeight again, i'm fine just replicating over tcp
00:26 JoeJulian Sure. I think I know how to work around this, I just have to find the right filename.
00:27 eightyeight ok
00:27 JoeJulian And something in one of these virtualbox instances on my computer is killing performance. Ugh!
00:27 eightyeight heh
00:27 eightyeight i appreciate your help, btw
00:29 eightyeight i removed the rdma.listen-port lines. still same fatal on the upgrade
00:31 * eightyeight backs up his /etc/glusterfs/ dir
00:31 JoeJulian mv /usr/lib64/glusterfs/*/rpc-transport/rdma.so /usr/lib64/glusterfs/*/rpc-transport/rdma.so.save
00:31 JoeJulian Oh, wait, your deb...
00:31 eightyeight could i modify /etc/glusterfs/glusterd.vol "option transport-type socket,rdma" ?
00:32 JoeJulian mv /usr/lib/glusterfs/*/rpc-transport/rdma.so /usr/lib/glusterfs/*/rpc-transport/rdma.so
00:32 JoeJulian mv /usr/lib/glusterfs/*/rpc-transport/rdma.so /usr/lib/glusterfs/*/rpc-transport/rdma.so.save
00:32 JoeJulian won't matter.
00:32 JoeJulian There's something about the fact that you actually have IB hardware: glusterd is failing when it tries to do something with it.
00:32 eightyeight actually, that worked
00:33 JoeJulian I just tricked it into thinking you don't have ib hardware.
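
The same workaround spelled out for a Debian layout (the wildcard in the one-liners above stands for the installed version directory; treat 3.x.y below as a placeholder), along with how to undo it:

    # hide the rdma transport so glusterd stops trying to initialize IB hardware
    cd /usr/lib/glusterfs/3.x.y/rpc-transport
    mv rdma.so rdma.so.save

    # to restore RDMA support later, move the file back and restart glusterd:
    #   mv rdma.so.save rdma.so
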
00:33 eightyeight didn't recreate the *.vol files though
00:33 JoeJulian start glusterd. it will.
00:33 * JoeJulian knows how to exude confidence.
00:33 eightyeight heh
00:34 eightyeight Starting glusterd service: glusterd failed!
00:34 * eightyeight moves out the rdma.so files
00:35 JoeJulian Oh, yeah, don't put it back unless you're ready to use rdma.
00:36 JoeJulian We'll come back to that then.
00:36 JoeJulian In rpm packaging, it's separate.
00:38 glusterbot News from newglusterbugs: [Bug 762766] rename() is not atomic <https://bugzilla.redhat.com/show_bug.cgi?id=762766>
00:39 eightyeight well, renamed the rdma.so files, and while i'm getting a successful exit code on the upgrade command, starting the init script fails
00:39 eightyeight i guess "glusterd --debug"?
00:40 eightyeight [2015-05-01 00:39:42.442610] E [xlator.c:403:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
00:41 eightyeight is this what it's complaining about? https://paste.debian.net/170352/
00:45 eightyeight got it
00:45 eightyeight my peer file did not have a FQDN for the other peer
00:45 eightyeight on both nodes
00:46 eightyeight sweet. peers are seen. volumes up and running.
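
For reference, glusterd keeps one file per peer under /var/lib/glusterd/peers, keyed by the peer's UUID; a quick way to see what hostname each node has recorded (the output shown is only an illustration):

    grep -H hostname1 /var/lib/glusterd/peers/*

    # a peer file typically looks roughly like:
    #   uuid=<peer-uuid>
    #   state=3
    #   hostname1=node2.example.com
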
00:46 * eightyeight tests mount and replication
00:47 eightyeight hmm. mount hanging
00:48 * eightyeight reboots the hypers first, before troubleshooting further
01:07 Pupeno joined #gluster
01:08 halfinhalfout joined #gluster
01:20 kdhananjay joined #gluster
01:29 halfinhalfout1 joined #gluster
01:31 halfinhalfout joined #gluster
01:36 gildub joined #gluster
02:16 kdhananjay joined #gluster
02:30 wushudoin joined #gluster
02:39 kdhananjay joined #gluster
02:47 atoponce joined #gluster
02:51 kdhananjay joined #gluster
02:55 Pupeno joined #gluster
03:18 Pupeno joined #gluster
03:18 Pupeno joined #gluster
04:02 edong23 joined #gluster
04:30 itisravi joined #gluster
05:17 atinmu joined #gluster
05:27 vimal joined #gluster
05:39 nsoffer joined #gluster
05:45 anrao joined #gluster
05:50 kdhananjay joined #gluster
05:53 chirino joined #gluster
06:04 RameshN joined #gluster
06:09 glusterbot News from newglusterbugs: [Bug 1217701] ec test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1217701>
07:01 lalatenduM joined #gluster
07:05 cholcombe joined #gluster
07:12 glusterbot News from resolvedglusterbugs: [Bug 1212008] Data Tiering: Unable to access existing data in directories of EC(disperse) volume after attaching tier <https://bugzilla.redhat.com/show_bug.cgi?id=1212008>
07:32 ktosiek joined #gluster
07:35 Pupeno joined #gluster
07:38 ctria joined #gluster
07:39 LebedevRI joined #gluster
07:48 soumya_ joined #gluster
07:49 cholcombe joined #gluster
08:02 ekuric joined #gluster
08:07 kovshenin joined #gluster
08:15 vincent_vdk joined #gluster
08:19 atinmu joined #gluster
08:28 Norky joined #gluster
08:33 kovsheni_ joined #gluster
09:02 elico joined #gluster
09:18 btspce joined #gluster
09:20 elico joined #gluster
09:22 btspce Hello, I'm currently doing a self-heal since one of our nodes was restarted. I'm unable to connect to the KVM guest as long as the self-heal is running on its image. Is this normal? I tried limiting with blkio.weight and cpu but nothing helps
09:23 btspce *Gluster 3.6.3 on EL 7.1
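
Not a confirmed fix, but the volume options usually suggested for softening self-heal impact on running VMs are the AFR heal knobs; an illustrative example with a placeholder volume name:

    # heal fewer files in parallel in the background
    gluster volume set myvol cluster.background-self-heal-count 4

    # only rewrite changed blocks of the image instead of copying it whole,
    # which is usually gentler on large VM images
    gluster volume set myvol cluster.data-self-heal-algorithm diff
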
09:32 _Bryan_ joined #gluster
09:32 lalatenduM joined #gluster
09:34 jiku joined #gluster
09:58 kovshenin joined #gluster
10:00 kovshenin joined #gluster
10:08 hchiramm joined #gluster
10:09 nsoffer joined #gluster
10:10 rafi joined #gluster
10:10 glusterbot News from newglusterbugs: [Bug 1217711] Upcall framework support along with cache_invalidation usecase handled <https://bugzilla.redhat.com/show_bug.cgi?id=1217711>
10:13 kovshenin joined #gluster
10:14 rafi joined #gluster
10:19 kovshenin joined #gluster
10:22 Philambdo joined #gluster
10:27 kdhananjay joined #gluster
10:50 rp_ joined #gluster
10:50 meghanam joined #gluster
10:59 deniszh joined #gluster
11:00 jcastill1 joined #gluster
11:01 btspce would performance.client-io-threads: on help when doing a self-heal on a guest image?
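
For reference, that option is set like any other volume option; whether it actually helps with heal-induced stalls is an open question here (volume name is a placeholder):

    gluster volume set myvol performance.client-io-threads on
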
11:05 jcastillo joined #gluster
11:06 anrao_ joined #gluster
11:10 glusterbot News from newglusterbugs: [Bug 1217722] Tracker bug for Logging framework expansion. <https://bugzilla.redhat.com/show_bug.cgi?id=1217722>
11:12 al joined #gluster
11:14 jcastill1 joined #gluster
11:16 social joined #gluster
11:16 social joined #gluster
11:19 jcastillo joined #gluster
11:22 bit4man joined #gluster
11:23 RameshN joined #gluster
11:39 kdhananjay joined #gluster
11:41 glusterbot News from newglusterbugs: [Bug 1217723] Upcall: xlator options for Upcall xlator <https://bugzilla.redhat.com/show_bug.cgi?id=1217723>
11:55 halfinhalfout joined #gluster
12:01 jmarley joined #gluster
12:16 ndk joined #gluster
12:24 ira joined #gluster
12:35 rafi joined #gluster
12:35 Gill joined #gluster
12:44 rafi joined #gluster
12:48 rafi1 joined #gluster
12:50 halfinhalfout joined #gluster
12:52 wkf joined #gluster
12:53 anoopcs joined #gluster
12:56 btspce is it safe to stop a healing process?
13:02 halfinhalfout1 joined #gluster
13:12 cholcombe joined #gluster
13:18 shaunm_ joined #gluster
13:25 kshlm joined #gluster
13:30 rafi joined #gluster
13:38 ekman joined #gluster
13:40 hamiller joined #gluster
13:43 glusterbot News from resolvedglusterbugs: [Bug 1081870] Updating glusterfs packages reads 'rpmsave' files created by previous updates, and saves the files as <file>.rpmsave.rpmsave. <https://bugzilla.redhat.com/show_bug.cgi?id=1081870>
13:50 ktosiek joined #gluster
13:54 kovsheni_ joined #gluster
14:01 kovshenin joined #gluster
14:04 wushudoin joined #gluster
14:10 Twistedgrim joined #gluster
14:17 cholcombe joined #gluster
14:21 theron joined #gluster
14:27 jcastill1 joined #gluster
14:32 jcastillo joined #gluster
14:48 coredump joined #gluster
14:50 gnudna joined #gluster
15:01 lpabon joined #gluster
15:03 theron_ joined #gluster
15:06 kshlm joined #gluster
15:10 TheSeven joined #gluster
15:11 side_control joined #gluster
15:11 glusterbot News from newglusterbugs: [Bug 1217766] Spurious failures in tests/bugs/distribute/bug-1122443.t <https://bugzilla.redhat.com/show_bug.cgi?id=1217766>
15:11 nbalacha joined #gluster
15:20 TheSeven assuming I want to host some CIFS-accessible storage using glusterfs (HA), what does such a setup look like?
15:21 TheSeven does glusterfs have its own CIFS server, or does that rely on samba sharing a gluster mount?
15:26 Gill_ joined #gluster
15:34 msvbhat TheSeven: glusterfs does not have its own CIFS server
15:34 theron joined #gluster
15:34 msvbhat TheSeven: But you can access it through the Samba plugin
15:34 msvbhat TheSeven: https://github.com/gluster/glusterfs/blob/master/doc/admin-guide/en-US/markdown/admin_settingup_clients.md#cifs
15:34 msvbhat That should be helpful
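
A minimal sketch of one way to do the Samba side, using the vfs_glusterfs module (shipped separately in some distributions) so Samba talks to the volume over libgfapi instead of re-exporting a FUSE mount; the share name, volume name and log path are placeholders:

    # excerpt to add to /etc/samba/smb.conf, then reload/restart smbd
    [gv0]
        vfs objects = glusterfs
        glusterfs:volume = gv0
        glusterfs:logfile = /var/log/samba/glusterfs-gv0.log
        path = /
        read only = no
        kernel share modes = no
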
15:41 ktosiek joined #gluster
15:44 atoponce left #gluster
15:49 RameshN joined #gluster
15:59 Twistedgrim joined #gluster
16:00 ktosiek joined #gluster
16:05 btspce anyone using cgroups for glusterfsd?
16:08 halfinhalfout1 no, but I'm curious too btspce
16:09 halfinhalfout1 s/no/not that I know of/
16:09 glusterbot What halfinhalfout1 meant to say was: An error has occurred and has been logged. Please contact this bot's administrator for more information.
16:10 halfinhalfout1 btspce: why are you interested in cgroups for glusterfsd?
16:12 btspce I did a self-heal on a KVM image after a node restart, and after a while the KVM guest becomes frozen: you can only ping it, nothing more. So I guess glusterfsd takes all the IO it can get and locks up the guest.
16:14 halfinhalfout1 yeah, I had a similar problem w/ the self-heal daemon and was thinking about trying cgroups as well.
16:14 halfinhalfout1 my use case is different, though. I have a large # of small files and sub-dirs
16:15 Twistedgrim joined #gluster
16:15 btspce smaller guest images that self-heal seem to survive this if the healing is done fast enough. Using cgroups and blkio.weight_device did not help on an already frozen KVM guest. Restarting one of the nodes doing the heal immediately brings the guest back to life..
16:16 halfinhalfout1 I'm not sure it would be helpful in my case, as for me it seems to be an issue w/ the self-heal daemon holding a lock on a dir as it heals everything beneath it.
16:17 btspce so I'm now doing a new self-heal on this guest image while running with blkio.weight 200 enabled on glusterfsd
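
A rough sketch of that kind of cgroup setup on EL7 with the v1 blkio controller (path and weight value are illustrative; blkio.weight is only honoured by the CFQ IO scheduler):

    # create a blkio cgroup with a reduced weight and move the brick
    # daemons into it
    mkdir -p /sys/fs/cgroup/blkio/glusterfsd
    echo 200 > /sys/fs/cgroup/blkio/glusterfsd/blkio.weight
    for pid in $(pidof glusterfsd); do
        echo "$pid" > /sys/fs/cgroup/blkio/glusterfsd/cgroup.procs
    done
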
16:17 halfinhalfout1 I'm running gluster 3.5 ... apparently there is a locking improvement in 3.6 that only locks the dir long enough to check metadata of files/dirs within the dir
16:17 halfinhalfout1 that's interesting btspce
16:17 btspce so far so good.. but this must be a very common problem with very few answers it seems..
16:19 btspce This is on a four node setup, distribute+replicate, gluster 3.6.3 and EL 7.1
16:21 btspce I thought the locking of files was fixed in 3.5?
16:23 halfinhalfout1 https://bugzilla.redhat.com/show_bug.cgi?id=1177418 indirectly leads me to believe dir lock is improved in 3.6
16:23 glusterbot Bug 1177418: unspecified, unspecified, ---, bugs, CLOSED CURRENTRELEASE, entry self-heal in 3.5 and 3.6 are not compatible
16:23 btspce afk 30min
16:28 meghanam joined #gluster
16:31 roost joined #gluster
16:32 TheSeven how well would gluster be suited for hosting a hell of a lot of small files (e.g. user home dirs) via CIFS?
16:33 nbalacha joined #gluster
16:33 TheSeven it should be highly available, and I'd think about distributing it across between 2 and 8 servers
16:34 TheSeven could it handle such a workload well (performance-wise), or would a setup with samba + gfs2 + drbd (or similar) be better?
16:34 ktosiek joined #gluster
16:36 rcschool joined #gluster
16:42 glusterbot News from newglusterbugs: [Bug 1217788] spurious failure bug-908146.t <https://bugzilla.redhat.com/show_bug.cgi?id=1217788>
16:52 Twistedgrim joined #gluster
16:52 bennyturns joined #gluster
16:52 Rapture joined #gluster
16:52 al joined #gluster
16:53 shubhendu_ joined #gluster
16:59 al joined #gluster
17:06 kovsheni_ joined #gluster
17:10 B21956 joined #gluster
17:20 cornfed78 joined #gluster
17:21 cornfed78 Hi all, I'm running into this issue:
17:21 cornfed78 https://bugzilla.redhat.com/show_bug.cgi?id=1088324
17:21 glusterbot Bug 1088324: medium, unspecified, ---, ndevos, CLOSED CANTFIX, glusterfs nfs mount can't handle flock calls.
17:21 cornfed78 but it's not quite the same
17:21 cornfed78 I'm having issues on the client, not mounting localhost
17:21 cornfed78 https://access.redhat.com/solutions/183483
17:21 cornfed78 verified with that KB article
17:22 cornfed78 http://fpaste.org/217601/05009371/
17:22 cornfed78 The issue manifests itself most obviously when PHP tries to use flock() to handle session files on a Gluster volume mounted over NFS
17:23 cornfed78 but the bugzilla comments lead me to believe this *should* work when mounting on another server
17:24 cornfed78 http://fpaste.org/217603/01059143/
17:24 cornfed78 the volume is very minimally configured
17:25 cornfed78 anyone know how to have php flock() (or any flock(), I guess) work correctly with Gluster NFS shares?
17:25 cornfed78 I don't want to go down the route of simply putting 'nolock' as an nfs client option, as that seems like it could lead to issues, especially when using two HTTP servers to serve shared content
17:27 cornfed78 there seem to be many people having this issue, but I can't seem to find any "solution":
17:27 cornfed78 https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=gluster+nfs+flock+site:www.gluster.org
17:28 cornfed78 here's another test:
17:28 cornfed78 http://fpaste.org/217605/30501319/
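
A way to reproduce the behaviour outside PHP, using util-linux flock(1) against the NFS-mounted volume (mount point and server name are placeholders); if this hangs or fails too, the problem is at the NFS/NLM layer rather than in PHP:

    # try to take an exclusive flock on the gluster NFS mount, timing out after 5s
    touch /mnt/gnfs/flock-test
    flock -x -w 5 /mnt/gnfs/flock-test -c 'echo got the lock'

    # NLM locking also needs the lock manager registered on the server side
    rpcinfo -p gluster-server | grep nlockmgr
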
17:35 sixty4k joined #gluster
17:36 uxbod joined #gluster
17:42 sixty4k_ joined #gluster
17:53 hamiller joined #gluster
18:06 deniszh joined #gluster
18:15 nsoffer joined #gluster
18:36 ekuric left #gluster
18:56 shaunm_ joined #gluster
19:01 theron_ joined #gluster
19:13 rcschool_ joined #gluster
19:28 Philambdo joined #gluster
19:34 rcschool_ joined #gluster
19:35 jobewan joined #gluster
19:42 glusterbot News from newglusterbugs: [Bug 1058300] VMs do not resume after paused state and storage connection to a gluster domain (they will also fail to be manually resumed) <https://bugzilla.redhat.com/show_bug.cgi?id=1058300>
20:08 Rapture joined #gluster
20:17 rwheeler joined #gluster
20:19 Le22S joined #gluster
20:23 redbeard joined #gluster
20:30 Pupeno joined #gluster
20:33 btspce joined #gluster
20:35 atrius joined #gluster
20:40 theron joined #gluster
20:50 gnudna left #gluster
20:56 Gill joined #gluster
21:05 kovshenin joined #gluster
21:09 rcschool joined #gluster
21:14 kovshenin joined #gluster
21:17 bit4man joined #gluster
21:22 ultrabizweb joined #gluster
21:30 jeek joined #gluster
21:34 lexi2 joined #gluster
21:39 Pupeno joined #gluster
21:39 kovshenin joined #gluster
21:41 Pupeno joined #gluster
22:02 kovshenin joined #gluster
22:03 sixty4k_ left #gluster
22:10 sixty4k joined #gluster
22:26 kaos01 joined #gluster
22:40 theron joined #gluster
23:09 Pupeno joined #gluster
23:09 Pupeno joined #gluster
23:10 Pupeno joined #gluster
23:17 wkf joined #gluster
23:26 Philambdo joined #gluster
