
IRC log for #gluster, 2016-01-05


All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:07 Vaelatern joined #gluster
00:08 Logos01 om: that is what I mean, yes.
00:09 Logos01 And as JoeJulian pointed out, if you have access to the configuration pieces necessary to use round-robin DNS (rrdns) or some DNS-level hostname resolution 'failover', then you could take that piece off of the gluster servers.
00:12 om ahh
00:12 om got it
00:13 Logos01 On the other hand, if you're *ALREADY* using a VIP on the systems (because, say, you're doing a bundled solution of all-the-things on each cluster-member due to an OS-node constraint) then hey, bob's your uncle.
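A minimal sketch of the rrdns approach Logos01 describes, assuming a BIND-style zone and a hypothetical name gluster.example.com; the backup-volfile-servers mount option shown last is an alternative on newer fuse clients (the option name varies by version):

    ; hypothetical zone snippet: one name, several A records, resolved round-robin
    gluster   IN  A  192.0.2.11
    gluster   IN  A  192.0.2.12
    gluster   IN  A  192.0.2.13

    # the client only needs the name to fetch the volfile; it then talks to all bricks directly
    mount -t glusterfs gluster.example.com:/myvol /mnt/myvol

    # alternative without touching DNS (newer fuse clients)
    mount -t glusterfs -o backup-volfile-servers=server2:server3 server1:/myvol /mnt/myvol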
00:18 om yea, I may have an issue when the brick servers are all down in one DC
00:19 om or worse if the client tries to connect to bricks in another dc
00:19 om when it should only connect to the bricks in its own dc
00:19 om the volume spans 2 or more dc's
00:21 om it's replication volume only
00:21 Vaelatern joined #gluster
00:21 om so it's fine like that, just hope the client doesn't bug out when it tries to connect to brick servers in other dc's that it cannot reach networking wise anyhow
00:28 JoeJulian om, ok, now for the next thought experiment. Your DC-DC connection is down. Clients at both locations write to a file. Now you have two independent revisions of that file and they cannot be reconciled. You're in split-brain.
00:35 om clients in both locations won't be writing to the same file
00:35 om clients do not connect to both locations
00:36 om just one
00:36 om USA clients in USA
00:36 om Europe clients in UK
00:36 om so there shouldn't be split brain in this case
00:37 om I suppose if that happened, for whatever reason...  the file would need to be removed and regenerated...?
00:37 pdrakeweb joined #gluster
00:38 om or remove the files in one DC and let replication take the other dc files?
00:38 om not sure how gluster handles this
00:39 om looks like you wrote an article on that! https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
00:39 glusterbot Title: GlusterFS Split-Brain Recovery Made Easy (at joejulian.name)
00:39 om hehehe
00:41 om but I don't think we will have split brain issues - it's unlikely for how we are using glusterfs
00:41 om with our clients...
00:42 Logos01 Have split brain because of clients?
00:42 Logos01 Split clients' brains!
00:42 Logos01 Problem solved.
00:42 Logos01 JoeJulian: I'm doing a bad thing by the way -- replicating the file content onto the hosts before making the volumes.
00:43 Logos01 It'll be interesting to see whether or not it can handle the self-heal once I actually make the volumes.
00:48 Vaelatern joined #gluster
00:50 JoeJulian Logos01: yes, that's bad. You create a race condition to see which client gets to set the gfid of the file. It's possible for multiple clients to do that simultaneously, then you have split-brain metadata.
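For reference, the gfid JoeJulian refers to lives in an extended attribute on each brick; a quick way to inspect it (run against the brick path, not the mount point; the path here is hypothetical):

    # dump gluster's xattrs for one file on a brick
    getfattr -d -m . -e hex /export/brick1/path/to/file
    # trusted.gfid=0x... should be identical on every replica; differing values mean gfid split-brain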
00:59 auzty joined #gluster
01:00 shyam joined #gluster
01:01 haomaiwa_ joined #gluster
01:06 ahino joined #gluster
01:23 dlambrig joined #gluster
01:36 Lee1092 joined #gluster
02:01 haomaiwa_ joined #gluster
02:02 nangthang joined #gluster
02:11 sankarshan_ joined #gluster
02:13 harish joined #gluster
02:25 nangthang joined #gluster
02:41 plarsen joined #gluster
02:56 F2Knight joined #gluster
03:01 haomaiwa_ joined #gluster
03:19 Peppaq joined #gluster
03:27 overclk joined #gluster
03:27 Vaelatern joined #gluster
03:39 atinm joined #gluster
03:44 EinstCrazy joined #gluster
03:58 ramky joined #gluster
03:59 itisravi joined #gluster
04:01 haomaiwa_ joined #gluster
04:11 RameshN joined #gluster
04:12 kanagaraj joined #gluster
04:15 nbalacha joined #gluster
04:23 d0nn1e joined #gluster
04:24 ppai joined #gluster
04:26 kotreshhr joined #gluster
04:34 farhorizon joined #gluster
04:35 calavera joined #gluster
04:35 nehar joined #gluster
04:39 kdhananjay joined #gluster
04:41 atalur joined #gluster
04:41 bharata-rao joined #gluster
04:43 jiffin joined #gluster
04:48 RameshN joined #gluster
04:50 nishanth joined #gluster
04:53 haomaiwa_ joined #gluster
05:01 haomaiwa_ joined #gluster
05:03 skoduri joined #gluster
05:06 kshlm joined #gluster
05:08 gem joined #gluster
05:11 arcolife joined #gluster
05:14 ndarshan joined #gluster
05:24 julim joined #gluster
05:25 Manikandan joined #gluster
05:26 hgowtham joined #gluster
05:26 Apeksha joined #gluster
05:28 aravindavk joined #gluster
05:32 pppp joined #gluster
05:33 kdhananjay Andreas: Are you Andreas Tsaridas?
05:36 vmallika joined #gluster
05:40 overclk joined #gluster
05:41 CyrilPeponnet joined #gluster
05:42 ppai joined #gluster
05:42 coredump joined #gluster
05:45 poornimag joined #gluster
05:46 vimal joined #gluster
05:46 Bhaskarakiran joined #gluster
05:55 atalur joined #gluster
05:57 sac joined #gluster
06:01 haomaiwa_ joined #gluster
06:03 ashiq joined #gluster
06:06 rafi joined #gluster
06:22 kasturi joined #gluster
06:24 Humble joined #gluster
06:24 karnan joined #gluster
06:26 nangthang joined #gluster
06:28 anil joined #gluster
06:28 mowntan joined #gluster
06:28 coredump joined #gluster
06:30 atalur joined #gluster
06:36 Bhaskarakiran joined #gluster
06:38 nehar joined #gluster
06:41 deepakcs joined #gluster
06:41 arcolife joined #gluster
06:48 Saravana_ joined #gluster
06:56 EinstCrazy joined #gluster
07:01 Bhaskarakiran joined #gluster
07:01 21WAAPDYN joined #gluster
07:07 jtux joined #gluster
07:08 d0nn1e joined #gluster
07:08 SOLDIERz joined #gluster
07:08 Bhaskarakiran joined #gluster
07:15 kovshenin joined #gluster
07:17 d0nn1e joined #gluster
07:29 dgbaley joined #gluster
07:34 doekia joined #gluster
07:35 dusmant joined #gluster
07:42 deepakcs joined #gluster
07:44 rwheeler joined #gluster
07:44 mhulsman joined #gluster
07:56 hgowtham_ joined #gluster
08:01 overclk joined #gluster
08:01 haomaiwa_ joined #gluster
08:06 harish_ joined #gluster
08:20 hgowtham_ joined #gluster
08:25 karnan joined #gluster
08:35 overclk joined #gluster
08:36 thoht___ joined #gluster
08:36 thoht__ left #gluster
08:38 skoduri joined #gluster
08:44 fsimonce joined #gluster
08:53 haomaiwa_ joined #gluster
08:55 Slashman joined #gluster
09:01 haomaiwa_ joined #gluster
09:16 kotreshhr joined #gluster
09:18 Saravana_ joined #gluster
09:21 nbalacha joined #gluster
09:21 s19n joined #gluster
09:22 poornimag joined #gluster
09:34 Saravana_ joined #gluster
09:34 aravindavk joined #gluster
09:39 gem joined #gluster
09:43 kanagaraj joined #gluster
09:45 mhulsman1 joined #gluster
09:50 kotreshhr joined #gluster
09:52 atalur joined #gluster
10:01 haomaiwa_ joined #gluster
10:07 nbalacha joined #gluster
10:20 atinm joined #gluster
10:22 Humble joined #gluster
10:31 poornimag joined #gluster
10:37 ahino joined #gluster
10:51 atinm joined #gluster
10:52 ira joined #gluster
10:54 ira joined #gluster
11:01 haomaiwa_ joined #gluster
11:02 Vaelatern joined #gluster
11:04 Manikandan REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC (~in 60 minutes) in #gluster-meeting
11:05 kkeithley1 joined #gluster
11:06 hchiramm joined #gluster
11:17 sahina joined #gluster
11:25 spalai joined #gluster
11:34 bluenemo joined #gluster
11:37 atalur joined #gluster
11:40 atalur joined #gluster
11:42 hgowtham joined #gluster
11:47 kotreshhr joined #gluster
11:49 spalai joined #gluster
12:00 ramky joined #gluster
12:01 hgowtham joined #gluster
12:01 Manikandan REMINDER: Gluster Community Bug Triage meeting started in #gluster-meeting
12:01 haomaiwa_ joined #gluster
12:04 spalai left #gluster
12:05 skoduri joined #gluster
12:11 Humble joined #gluster
12:20 sankarshan_ joined #gluster
12:24 kokopelli joined #gluster
12:25 kotreshhr joined #gluster
12:31 ramky joined #gluster
12:36 d0nn1e joined #gluster
12:38 rwheeler joined #gluster
12:40 Humble joined #gluster
12:49 nbalacha joined #gluster
12:56 plarsen joined #gluster
12:56 rafi1 joined #gluster
13:01 haomaiwa_ joined #gluster
13:10 ashiq_ joined #gluster
13:15 chirino joined #gluster
13:18 cliluw joined #gluster
13:18 dusmant joined #gluster
13:18 Bhaskarakiran joined #gluster
13:21 Vigdis joined #gluster
13:21 Vigdis left #gluster
13:29 kotreshhr left #gluster
13:32 aravindavk joined #gluster
13:36 MACscr Whats the story with all these tmp mounts?
13:36 MACscr \_ /usr/sbin/glusterfs -s localhost --volfile-id volume1 -l /tmp/.glusterfs.log.25810 -p /tmp/.glusterfs.pid.25810 -LTRACE /tmp/.glusterfs.mnt.25810
13:38 MACscr full details http://pastie.org/pastes/10671189/text?key=lqvba3sssgkytenwjltpg
13:38 glusterbot Title: Private Paste - Pastie (at pastie.org)
13:40 haomaiwa_ joined #gluster
13:47 B21956 joined #gluster
13:51 glafouille joined #gluster
13:54 shaunm joined #gluster
13:58 atinm joined #gluster
14:00 Manikandan joined #gluster
14:00 unclemarc joined #gluster
14:07 7GHABXU1Y joined #gluster
14:09 shyam joined #gluster
14:18 ndevos MACscr: no idea, I cant find any references to /tmp/.glusterfs.mnt.<PID> in the sources, or the -LTRACE option
14:19 ndevos atinm: do you have an idea what would do a mount on /tmp/.glusterfs.mnt.<PID> ?
14:20 atinm ndevos, no :(
14:23 ndevos atinm: ok, well thanks for thinking about it :)
14:27 ndevos MACscr: you can try to find a process that uses those mounts, with something like this: ls -dl /proc/*/cwd 2>/dev/null | grep /tmp/.glusterfs.mnt
14:27 ndevos well, that is, if the process chdirs into that directory
14:28 onebree joined #gluster
14:28 ndevos MACscr: if a file has been opened, you can check with: ls -l /proc/*/fd 2>/dev/null | grep /tmp/.glusterfs.mnt
14:29 ndevos I think /proc is awesome :)
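Putting those hints together, a rough way to list the stray mounts and find whoever holds them (assumes the default mount namespace; fuser comes from psmisc):

    # the temporary mounts themselves
    grep '/tmp/.glusterfs.mnt' /proc/mounts

    # processes with a cwd or an open fd under them
    ls -dl /proc/*/cwd /proc/*/fd/* 2>/dev/null | grep '/tmp/.glusterfs.mnt'

    # or ask fuser which PIDs use one of the mountpoints (PID suffix from the earlier paste)
    fuser -vm /tmp/.glusterfs.mnt.25810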
14:30 onebree hello, all
14:30 onebree happy new year
14:36 onebree Last week, I asked about rsync working with glusterfs. I found out HOW rsync misbehaves with our gluster volumes.
14:36 _Bryan_ joined #gluster
14:37 onebree as per my boss -- "to rsync it's checking the mounted volume, but gluster checks all of them when the volume is being accessed to make sure they are on all three bricks". Why does this happen, does anyone know?
14:38 ndevos onebree: could you explain the problem that you are facing? maybe the description makes more sense with that :)
14:39 liewegas joined #gluster
14:39 onebree ndevos: We want to use rsync to make sure a set of directories (of Asterisk-related files) is synced to a mounted instance of Gluster. From there, gluster will take all files and copy to 3 bricks.
14:39 kbyrne joined #gluster
14:40 arcolife joined #gluster
14:40 onebree Issue is that (again from boss) -- "the ls takes too long when there are a lot of files". The LS is either the one that rsync or gluster makes - I am not sure which.
14:41 onebree Files are things like voicemails, ogm, or other things saved and handled by Asterisk
14:41 arcolife joined #gluster
14:41 ndevos onebree: right, that might be pretty straight forward - note that a replica-3 does synchronous I/O, meaning that the mountpoint writes the files to all three bricks before it is marked as complete
14:42 onebree what is pretty straight-forward? The fix for this?
14:42 dgandhi joined #gluster
14:42 ndevos no, the cause that it is "too slow"
14:42 onebree To circumvent this, I was tasked to write a Ruby program that works similar to Rsync, but instead just checks file name and mod date.
14:43 dgandhi joined #gluster
14:43 ndevos like "cp --update ..."?
14:44 dgandhi joined #gluster
14:44 ndevos but still, the mountpoint (if you use fuse) will write the file to all three bricks at the same time, that effectively gives you 1/3 of the full bandwidth
14:45 dgandhi joined #gluster
14:45 ndevos if you have a faster network between the storage servers, you could mount over NFS, that will transfer the files to the NFS-server which will then do the replication for you
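A minimal example of the two mount styles being compared, assuming Gluster's built-in NFS server is running on server1 (it only speaks NFSv3 over TCP):

    # fuse mount: this client writes every file to all three bricks itself
    mount -t glusterfs server1:/myvol /mnt/myvol

    # nfs mount: the client writes once to server1, which then fans out the replication
    mount -t nfs -o vers=3,proto=tcp,mountproto=tcp server1:/myvol /mnt/myvol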
14:46 dgandhi joined #gluster
14:48 dgandhi joined #gluster
14:48 onebree ndevos: My ruby program does FileUtils.cp, preserve: true
14:48 ndevos onebree: oh, I dont speak Ruby :)
14:49 dgandhi joined #gluster
14:50 dgandhi joined #gluster
14:51 nehar joined #gluster
14:53 kbyrne joined #gluster
14:56 MACscr ndevos: crap, i think its this monitoring script that is causing the problem https://github.com/avati/glfs-health/blob/master/glfs-health.sh
14:56 glusterbot Title: glfs-health/glfs-health.sh at master · avati/glfs-health · GitHub (at github.com)
14:57 squizzi joined #gluster
14:58 sponge joined #gluster
14:58 ndevos MACscr: yeah, that looks well possible
14:59 onebree ndevos: FileUtils.cp, preserve: true is the same as cp --preserve=mode,ownership,timestamps
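In shell terms, the name-and-mtime copy onebree describes is roughly the following; the Asterisk spool path and destination are hypothetical:

    # copy only files that are missing or newer on the gluster mount,
    # keeping mode, ownership and timestamps
    cp -r -u --preserve=mode,ownership,timestamps /var/spool/asterisk/. /mnt/myvol/asterisk/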
15:00 kbyrne joined #gluster
15:01 haomaiwa_ joined #gluster
15:02 kbyrne joined #gluster
15:03 sponge hey, excuse me if this is a common question. When I was using gluster-native NFS to export volume, I noticed that service nlockmgr is with tcp version4 only, but not with udp.
15:04 coredump joined #gluster
15:04 sponge Is that because of the implementation of glusterfs?
15:07 shyam joined #gluster
15:08 farhorizon joined #gluster
15:15 ndevos sponge: it actually should try to register NLM over UDP too
15:15 ndevos well, I thought so at least
15:16 ndevos yeah, it tries this and logs an error if it fails: svc_register(transp, NLMCBK_PROGRAM, NLMCBK_V1, nlmcbk_program_0, IPPROTO_UDP)
15:16 sponge when I used command "# rpcinfo -p", there is no UDP v4
15:17 ndevos I dont think it there is a NLMv4? do you mean IPv4 ?
15:18 ndevos or, you dont happen to mean NFSv4? that is not an option with Gluster/NFS at all...
15:19 sponge svc_register(transp, NLMCBK_PROGRAM, NLMCBK_V1, nlmcbk_program_0, IPPROTO_UDP)
15:19 sponge NLMCBK_v1
15:20 sponge but the program on client machine sends the NLM request as UDP v4
15:21 sponge I find this NLM request when using wireshark
15:22 nerdcore joined #gluster
15:23 ndevos ah, right, I think the protocol is called NLM4, but uses RPC-version=1
15:23 ndevos or something like that, the versioning of those bits is a little awkward
15:24 nerdcore I'm experiencing a lot of "Stale file handle" errors on an NFS-mounted gluster volume. The volume is a 3-brick replica volume, each server participating in it mounts the volume as 127.0.0.1:/vol-name and only one of the three servers is experiencing this error. The following output is seen in the nfs.log on the server which is exhibiting the "Stale file handle" error. Any ideas? http://nerdcore.net/mike/gluster.nfs.log.1452007268
15:24 ndevos maybe more like, Network Lock Manager version 4, revision 1
15:25 javi404 joined #gluster
15:25 ndevos sponge: but yeah, when gluster/nfs starts it will try to register nlockmgr on UDP first, and only if that succeeds it will register on TCP
15:26 sponge ndevos: glusterfs supports NFSv3, and NFSv3 requires NLMv4
15:26 glafouille joined #gluster
15:27 wistof joined #gluster
15:27 jobewan joined #gluster
15:27 sponge @ndevos   I'm using glusterfs-3.5.6
15:27 sponge on Ubuntu
15:28 harish joined #gluster
15:28 Don joined #gluster
15:29 ndevos sponge: just make sure you do not have the "lockd" kernel module loaded, it might block correct functioning of gluster/nfs (meaning you can not use nfs-mounts on a gluster storage server)
15:30 ndevos nerdcore: you really should not mount volumes (or anything really) over NFS on Gluster storage servers, use "mount -t glusterfs .." instead
15:31 nerdcore wow my gluster nfs.log seems to be spewing that same error every two minutes :/
15:31 nerdcore ndevos: I found the performance using NFS was superior to the performance using FUSE. Not sure why this is
15:31 sponge but when I run the command "# rmmod lockd", it prints "ERROR: Module lockd is in use by nfsd"
15:32 nerdcore ndevos: I will go back to trying the fuse method for a while and see how it goes.
15:32 ndevos sponge: well, you should also not have the kernel nfs server running either ;-)
15:32 ndevos nerdcore: a "stale file handle" only means that something tried to access a file that was removed (or maybe renamed) by an other process
15:32 neofob joined #gluster
15:32 sponge first I typed the command "# rmmod nfs"
15:33 sponge then the command "# rmmod lockd"
15:33 sponge but there was still the prompt "ERROR: Module lockd is in use by nfsd"
15:33 ndevos nfsd is the kernel module for the nfs server, and you really should not try to run two nfs servers on one system
15:34 ndevos maybe there is a service that loaded the module? double check that all nfs services are disabled on boot, and only have rpcbind enabled
15:35 nerdcore ndevos: in this case, I'm quite sure nothing else has modified the file in question. It actually happens on each attempt to cat any file on the mount :(
15:35 nerdcore the file displays, followed by that error
15:35 nerdcore every file it seems
15:36 ndevos nerdcore: that is really strange... what NFS-client is that?
15:36 sponge yes, I noticed this too. So I stopped nfs-kernel-server with "# /etc/init.d/nfs-kernel-server stop", then ran "# gluster volume start gv0 force", and then "# rpcinfo -p", but I can't find NLM with UDP v4, just NLM with TCP v4 at 32768
15:37 ndevos sponge: kill the gluster/nfs process before doing a "gluster volume start .. force"
15:38 sponge ok I'll try this
15:38 JoeJulian ndevos: Is that needed? I'm pretty sure I noticed shd restarts on a start force. Does nfs, too, or am I just wrong about it restarting?
15:39 ndevos JoeJulian: I always thought that it was needed... maybe I'm wrong?
15:39 nerdcore ndevos: regular system-packaged mount command on Ubuntu 14.04, mounting gluster volume as NFS on localhost
15:40 DonLin Hello everyone, I would like to ask for a word of advice: if I were to run GlusterFS with the built-in NFS server to host production VMware VMs, would that be recommended usage or would I be asking for trouble? I have tested it (including failover with CTDB) and it seems to work fine, however I cannot find a lot of information on the subject (neither success nor failure).
15:40 ndevos nerdcore: hmm, what kernel version is that, and what architecture? I dont know why it would use a file handle that became invalid
15:41 nerdcore ndevos: 3.13.0-68-generic amd64
15:42 ndevos DonLin: people seem to run that, and I'm not aware of any major issues with it
15:42 nerdcore ndevos: the strangest thing to me is that only one of the three brick servers experiences this error, while all three are performing the same mount
15:42 nerdcore two of them seem to work just fine
15:43 ndevos nerdcore: and versions of gluster are all the same everywhere?
15:44 DonLin ndevos: Thanks, I'll test it further then
15:44 nerdcore ndevos: yes, 3.7.6-ubuntu1~trusty1
15:48 ndevos DonLin: there is some advice on configuring volumes for VM images, like https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Configuring_Red_Hat_Enterprise_Virtualization_with_Red_Hat_Gluster_Storage/chap-Hosting_Virtual_Machine_Images_on_Red_Hat_Storage_volumes.html#Configuring_Volumes_Using_the_Command_Line_Interface
15:48 glusterbot Title: Chapter 3. Hosting Virtual Machine Images on Red Hat Gluster Storage volumes (at access.redhat.com)
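The core of that document is applying the predefined 'virt' option group to the volume; a hedged one-liner, assuming the group file shipped with your gluster packages (/var/lib/glusterd/groups/virt):

    # apply the recommended VM-image tunables in one step
    gluster volume set myvol group virt
    # then check what changed
    gluster volume info myvol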
15:49 wushudoin joined #gluster
15:49 ndevos nerdcore: could you mount the volume on a different path on the 'broken' storage server, and mount it from one of the other storage servers?
15:50 sponge ndevos: I just had a try. I thought I killed the gluster/nfs process because after using "# rpcinfo -p" the result shows "100000    4   tcp    111  portmapper" "100024    1   udp  51665  status" only
15:51 wushudoin joined #gluster
15:51 skylar joined #gluster
15:51 mowntan Guys, Is it safe to tar a brick from the backend of a replicate volume to provide a crude form of a backup?
15:52 nerdcore ndevos: mounting the "broken brick" on a working server produces the same error. Mounting a working brick on the broken server produces no error.
15:52 DonLin ndevos: Thanks, but it does not seem to apply to VMWare/NFS because it says: Important - After tagging the volume as group virt, use the volume for storing virtual machine images only and always access the volume through the glusterFS native client.
15:52 DonLin or am I missing something?
15:54 ndevos DonLin: well, not all settings apply when mounting over NFS, and I think Red Hat only tests over fuse mounts
15:54 ndevos (for the VM use case that is)
15:55 ndevos nerdcore: its confusing a little, you can not mount a brick over NFS, you can mount a Gluster volume from an NFS server...
15:56 ndevos nerdcore: I assume you mean "when mounting on the volume on a good server from the broken nfs-server, the broken nfs-server writes the same error to the log"?
15:56 nerdcore ndevos: yes i apologize. I am using bad terminology. In my case I have 3 servers participating in a number of gluster replicas, providing bricks for those replicas. One brick per volume per server. 3 server, 4 volumes
15:57 DonLin ndevos: Hmmm, Makes sense indeed when I look at the parameters
15:59 ndevos nerdcore: does the nfs-server log a "stale file handle" for all files, or only for files in a certain sub-directory?
16:00 nerdcore ndevos: i dont think the server is logging the stale file handle error. It is printed on stdout following file ops, such as `cat afile`
16:00 nerdcore although it is logging the errors shown. Also a new one I will post in a sec
16:01 haomaiwa_ joined #gluster
16:01 ndevos nerdcore: oh, so things are the other way around... hmm... it'll help if you can ,,(paste) them :)
16:01 glusterbot nerdcore: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
16:01 nerdcore This error is logged as soon as I mount the volume as NFS from the server which is exhibiting the issues: http://nerdcore.net/mike/gluster.nfs.log.1452009638
16:01 glafouille joined #gluster
16:02 nerdcore I prefer to use my own paste sources, thx
16:03 bowhunter joined #gluster
16:03 ndevos oh, sure, whatever works, but glusterbot does not like pastebin.com
16:03 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
16:04 nerdcore when I cat a file on the volume which has been mounted as NFS from the server exhibiting the issues, nothing is printed to the server nfs.log but every single file produces a "Stale file handle" error
16:04 ndevos "RPC program version not available (req 100003 4)" means that the client tried to mount over NFSv4, and that will not work, most clients will fall back to NFSv3 automatically
16:05 nerdcore ndevos: I'm mounting with exactly the same command from a server which works as from a server which does not, so I'm not sure where any decision to use certain NFS version would occur
16:06 ndevos nerdcore: yeah, its just that the error you grabbed from the log is not relevant to the stale file handle that you get
16:06 mowntan When I look at the backend files, they look legitimate
16:06 sponge ndevos: I've browsed the source code of glusterfs-3.5.6, but couldn't identify the functions related to the registration of nlockmgr. May I have some advice?
16:07 ndevos mowntan: you can do that, but not if you use a stripe or sharded volume - and make sure that your tar command handles hard-links well
16:09 mowntan ndevos: Fairly new to gluster, so my assumption is that replication and distribution are done if the data goes through the fuse/nfs layer... the data being copied to disk is not obfuscated in any way
16:10 ndevos sponge: xlators/nfs/server/src/nlmcbk_svc.c and the like should help
16:10 mowntan ndevos: I have a 4 node cluster (replicate X4), looking to try some other variant (distributed-replicate) but dont want to lose my current data
16:10 mowntan ndevos: thanks
16:11 sponge ndevos: Thanks a lot!
16:11 nerdcore ndevos++
16:11 glusterbot nerdcore: ndevos's karma is now 25
16:12 ndevos mowntan: yes, in that case the data on the bricks would be the same, there are some special directories like .glusterfs that contain some Gluster internal details, and also Gluster sets some extended attributes on files
16:13 mowntan ndevos: so I can tar up the gluster directories and then re-configure a new cluster and pump the data back into the cluster using the fuse/nfs client?
16:14 mowntan ndevos: minus the .gluster dir of course.
16:14 ndevos mowntan: in certain cases Gluster creates "link files", that could confuse your backup/restore a little (empty files with only the T-bit/permission set)
16:15 calavera joined #gluster
16:15 JoeJulian mowntan: bsd tar backs up extended attributes by default.
16:15 ndevos mowntan: yes, restore through a fuse/nfs mountpoint on a different (or new) volume
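A hedged sketch of that brick-level backup with a reasonably recent GNU tar, keeping gluster's extended attributes and skipping the internal .glusterfs directory (brick and mount paths are hypothetical):

    # back up one brick; hard links are kept by default, the trusted.* xattrs need the flags
    tar --xattrs --xattrs-include='trusted.*' --exclude='./.glusterfs' \
        -czf brick1-backup.tar.gz -C /export/brick1 .

    # restore through a fuse/nfs mount of the new volume; gluster recreates its own
    # metadata there, so the trusted.* xattrs from the backup are not needed
    tar -xzf brick1-backup.tar.gz -C /mnt/newvol/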
16:17 MACscr anyone have a favorite nagios or check_mk plugin/check that they use with gluster?
16:18 F2Knight joined #gluster
16:25 mowntan Thanks Guys, I'll give it a go and let you know how it went
16:25 PsionTheory joined #gluster
16:26 ndevos good luck mowntan!
16:28 ndevos MACscr: uhm, glubix (on github) provides some templates for Zabbix...
16:28 ndevos and there is https://github.com/oVirt/gluster-nagios-monitoring too
16:28 glusterbot Title: oVirt/gluster-nagios-monitoring · GitHub (at github.com)
16:31 ndevos hmm, but I'm not sure if (or where) that contains any nagios scripts
16:34 abyss^ joined #gluster
16:43 Manikandan joined #gluster
16:43 atalur joined #gluster
16:56 klaxa joined #gluster
16:58 Manikandan_wfh joined #gluster
17:01 haomaiwa_ joined #gluster
17:04 coredump joined #gluster
17:05 bennyturns joined #gluster
17:18 ekuric joined #gluster
17:22 sponge ndevos: dear ndevos, are you here?
17:24 sponge ndevos: I have to come back here to ask some questions, because I just succeeded with the commands "rmmod nfs" and "rmmod lockd"
17:25 sponge ndevos: Then "gluster volume start ... force" , but the result of "# rpcinfo -p" was still "4   tcp  38468  nlockmgr" only
17:29 ndevos sponge: do you have anything in the nfs.log from the time that you did the "start force"?
17:30 sponge let me have a check
17:34 sponge [2016-01-05 17:21:21.789841] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-gv1-client-0: changing port to 49155 (from 0) [2016-01-05 17:21:21.794371] I [client-handshake.c:1677:select_server_supported_programs] 0-gv1-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330) [2016-01-05 17:21:21.796931] I [client-handshake.c:1462:client_setvolume_cbk] 0-gv1-client-0: Connected to 192.168.1.150:49155, attached to remote volume '/
17:34 sponge 44: end-volume
17:34 sponge 45:
17:35 sponge ----------------------------------------​--------------------------------------+
17:35 ndevos it would mention something about nlm
17:35 glusterbot sponge: ---------------------------------------​-------------------------------------'s karma is now -11
17:35 sponge [2016-01-05 17:21:21.789841] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-gv1-client-0: changing port to 49155 (from 0)
17:35 sponge [2016-01-05 17:21:21.794371] I [client-handshake.c:1677:select_server_supported_programs] 0-gv1-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
17:35 ndevos ~paste | sponge
17:35 glusterbot sponge: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
17:35 sponge [2016-01-05 17:21:21.796931] I [client-handshake.c:1462:client_setvolume_cbk] 0-gv1-client-0: Connected to 192.168.1.150:49155, attached to remote volume '/export/sdb5/brick2'.
17:37 sponge sorry, I'm a newcomer : >
17:37 sponge ~paste |  45: +---------------------------------------​---------------------------------------+ [2016-01-05 17:21:21.789841] I [rpc-clnt.c:1729:rpc_clnt_reconfig] 0-gv1-client-0: changing port to 49155 (from 0) [2016-01-05 17:21:21.794371] I [client-handshake.c:1677:sele​ct_server_supported_programs] 0-gv1-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330) [2016-01-05 17:21:21.796931] I [client-handshake.c:1462:clien
17:37 glusterbot 45: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
17:37 glusterbot sponge: +---------------------------------------​-------------------------------------'s karma is now -3
17:37 ndevos sponge: so, if that was the time of the "start force", gluster/nfs got restarted, thats good
17:37 sponge sorry, I'm a newcomer : >
17:37 ndevos heh, np :)
17:38 sponge Server lk version = 1
17:38 sponge lk
17:38 ndevos the "nc termbin.com 9999" command can be used to paste a file or command output
17:38 sponge what's 'lk' mean
17:38 ndevos like: nc termbin.com 9999 < nfs.log
17:39 sponge ok
17:39 ndevos lk stands for lock, but that is not relevant for this, even if you care about NLM :)
17:40 ndevos the "nc" command will return a url, and you can post that url here, makes things easier to view, and doesnt clutter the channel so much
17:41 sponge http://termbin.com/9oce
17:43 sponge hey, I must say "thanks" for the trick of "nc termbin.com 9999 < nfs.log"
17:45 sponge you know, that's really good for getting this method
17:46 bennyturns joined #gluster
17:47 ndevos termbin.com is really cool!
17:48 sponge yes, precisely
17:48 nathwill joined #gluster
17:49 ndevos sponge: could you do this too? rpcinfo -p | nc termbin.com 9999
17:50 sponge just a moment
17:50 sponge http://termbin.com/bcoa
17:51 sponge ndevos: http://termbin.com/bcoa
17:51 doekia joined #gluster
17:51 ndevos sponge: ok, so this is registered on udp: 100021    1   udp    868  nlockmgr
17:52 sponge but, the version is 1,not 4
17:52 ndevos yeah, that confuses me a little
17:52 sponge port 38468 with tcp v4
17:53 sponge but there is no udp v4 for nlockmgr
17:54 sponge Is this condition related to the implementation of glusterfs?
17:55 ndevos sponge: yeah, it is
17:55 sponge so that means the implementation of glusterfs does not support NLMv4 over UDP v4?
17:56 sponge do you mean that the implementation of glusterfs does not support NLMv4 over UDP v4?
17:58 ndevos well, I'm in doubt now, maybe I've been looking at the wrong piece of code
17:58 sponge It has puzzled me for a long time.
18:00 sponge But I can't tell whether the implementation is the reason or not.
18:01 haomaiwa_ joined #gluster
18:06 sponge I've been working with glusterfs for less than a month and only have a little understanding of it.
18:06 JesperA_ joined #gluster
18:07 ndevos sponge: ah, right, it seems that indeed only NLMv4 is available over TCP
18:07 sponge how can I learn more about glusterfs, in more depth?
18:07 Rapture joined #gluster
18:08 sponge What are the benefits?
18:08 ndevos sponge: xlators/nfs/server/src/nlm4.c contains the nlm4prog structure, and that is only initialized with gluster-rpc functions, and they are tcp only
18:09 sponge or what is the necessity
18:09 ndevos sponge: if you rely on NLMv4 over UDP, you may want to check if NFS-Ganesha supports that, it has a native Gluster backend (FSAL_GLUSTER) too
18:10 sponge you mean the function 'nlm4svc_init' in the file 'nlm4.c'?
18:12 sponge ndevos: the integration of NFS-Ganesha and glusterfs?
18:13 ndevos yes, indeed, that is the function that sets things up, and nfs_init_version() will call that
18:13 ndevos sponge: yeah, see http://gluster.readthedocs.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Intergration/
18:13 glusterbot Title: Configuring NFS-Ganesha server - Gluster Docs (at gluster.readthedocs.org)
18:14 sponge ndevos: a new topic for me ...
18:15 ndevos sponge: we're encouraging users to move to nfs-ganesha, it has a more current feature set for the NFS protocols
18:16 * ndevos leaves for the day, good night!
18:16 sponge ndevos: one additional question: how can I learn more about glusterfs?
18:17 ndevos sponge: send an email with questions about topics you would like to learn about to gluster-devel@gluster.org, that is where we can help you out with that
18:17 sponge you found the problem so quickly, impressive
18:17 * ndevos really leaves now
18:17 sponge ndevos: thanks a lot!
18:18 ndevos cya sponge!
18:18 sponge see you
18:45 mhulsman joined #gluster
18:54 onebree I just ran into an issue. I was on a test remote server, and ran make uninstall in the gluster source we are using.... To undo that all I need is make install?
18:56 samppah joined #gluster
19:01 haomaiwa_ joined #gluster
19:02 JoeJulian onebree: I think that's why most of us use packages. All I can suggest is to try it and see.
19:03 onebree I need to use 3.5.3, which is not available as a package
19:03 onebree for me at least
19:03 JoeJulian What distro?
19:03 onebree centos 7
19:04 onebree When I search for it with yum, only the latest 3.7 appears
19:05 JoeJulian http://download.gluster.org/pub/gluster/glusterfs/3.5/
19:05 glusterbot Title: Index of /pub/gluster/glusterfs/3.5 (at download.gluster.org)
19:06 onebree I see that now. But how do I use yum to install? When I did this originally, I was instructed to just download from source and tar it
19:09 JoeJulian There's a .repo file in /etc/yum.repos.d that you can edit.
19:11 JoeJulian And I have no idea who would recommend installing from source. Hell, even for applications we use at IO that are not packaged, we just package them ourselves and put them in an internal repo.
19:12 JoeJulian There's also a ton of crash, invalid pointer, race conditions and security fixes in 3.5.7 that aren't in 3.5.3. I strongly recommend upgrading.
19:13 onebree I think I mentioned this last week, upgrading is not my call. I don't even monitor gluster. I am just tasked with writing that ruby program
19:13 onebree But thank you for the heads up
19:19 onebree JoeJulian: I added the repo to yum.repo.d, but when I do yum install, it does not show the version number I am installing. How do I know if I am getting the version I want?
19:20 doekia joined #gluster
19:22 cornfed78 joined #gluster
19:22 rafi joined #gluster
19:23 JoeJulian Assuming yum list still shows the 3.7 package, you'll have to add an exclude=gluster* to the repo that you want it excluded from.
19:23 onebree Also, the 3.5.3 repo (assuming I am looking at the right one), points to the LATEST 3.5 version, which comes up as 3.5.7
19:23 JoeJulian Then edit the file, change LATEST to 3.5.3
19:24 onebree Oh, okay :-)
19:24 onebree I have not dealt with yum repos (add or edit) before. Thank you!
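For illustration, the two edits JoeJulian describes look roughly like this, assuming stock CentOS repo file names:

    # in /etc/yum.repos.d/CentOS-Base.repo, under each section that should not provide gluster:
    [base]
    name=CentOS-$releasever - Base
    exclude=glusterfs*

    # then pin the gluster repo itself and refresh the metadata
    sed -i 's/LATEST/3.5.3/g' /etc/yum.repos.d/glusterfs-epel.repo
    yum clean all && yum list glusterfs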
19:24 wushudoin joined #gluster
19:30 onebree I edited the repo list to have LATEST, and again with 3.5.3. When I run yum list glusterfs, I only see 3.7.x
19:32 Rapture joined #gluster
19:33 JoeJulian scroll up
19:34 onebree oh didn't see that
19:35 onebree That worked, however, s/LATEST/3.5.3 still shows 3.5.7
19:36 JoeJulian Might be cached. `yum clean all`
19:37 onebree It cleaned, but still 3.5.7
19:37 onebree btw thank you for taking the time to help :-)
19:41 onebree It looks like all the repo files in 3.5/ point to LATEST, and not their respective versions
19:41 JoeJulian correct
19:42 JoeJulian That's why I suggested editing the file.
19:42 onebree Which I did, and it did not work
19:43 JoeJulian curl http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.3/EPEL.repo/glusterfs-epel.repo | sed 's/LATEST/3.5.3/g' > /etc/yum.repos.d/glusterfs-epel.repo ; yum clean all ; yum list
19:48 onebree JoeJulian: thank you very much. This did the trick!
19:51 onebree JoeJulian `++
19:51 glusterbot onebree: `'s karma is now 2
19:52 onebree Not what I expected...
19:52 dscastro joined #gluster
19:52 dscastro hello
19:52 glusterbot dscastro: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:53 dscastro can i export sub-dirs from a volume?
19:54 JoeJulian no
19:54 bfoster joined #gluster
19:54 JoeJulian At least not with gluster native nfs. You could through nfs-ganesha
19:55 dscastro JoeJulian: do we have a penalty for creating lots of volumes?
19:58 bio_ joined #gluster
20:00 onebree Noobish question. In regards to the following link, is mountdir the client side? https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Setting%20Up%20Clients/#manual-mount
20:00 glusterbot Title: Setting Up Clients - Gluster Docs (at gluster.readthedocs.org)
20:01 haomaiwa_ joined #gluster
20:08 bio_ i have a distributed, replicated volume with 4 nodes. everything working so far, just wondering why when removing bricks or rebalancing everything is done on all 4 nodes, but the status commands just show progress on 2 of those nodes: http://pastebin.com/AJV0jkud
20:08 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
20:09 bio_ ok, new paste: http://ur1.ca/odukt
20:09 glusterbot Title: #307542 Fedora Project Pastebin (at ur1.ca)
20:09 ajneil joined #gluster
20:10 ajneil wonder if anyone can help me with a dilemma?
20:11 DV joined #gluster
20:11 JoeJulian onebree: yes
20:13 onebree And the "test-volume" is the volume, which splits into bricks, right?
20:13 JoeJulian bio_: Don't take this as gospel, but I suspect in a replica 2 volume, it's only actively crawling on one of the two replica.
20:13 JoeJulian @glossary
20:14 glusterbot JoeJulian: A "server" hosts "bricks" (ie. server1:/foo) which belong to a "volume"  which is accessed from a "client"  . The "master" geosynchronizes a "volume" to a "slave" (ie. remote1:/data/foo).
20:14 JoeJulian ajneil: Probably not. Nobody knows what it is. :p
20:14 ajneil I have a 3 node cluster running centos 6.7, I  reinstalled one node with centos 7 and tried to add it back
20:16 ajneil the probe failed and I'm not sure but one of the two 6.7 nodes got detached too
20:17 ajneil I was able to reattach the other 6.7 node but the vols directory is empty and the volume info is not being synced from the known good node
20:20 ajneil var/lib/glusterd/vols that is
20:20 badSector joined #gluster
20:21 badSector Hello. I have the following issue. I use CentOS as a front-end for glusterfs. There are users which access gluster as samba shares (trunk port from switch to server and VLANs). I got to the point where the VLAN IP addresses are within the same subnet. Any idea how to overcome the issue on the server?
20:21 ajneil most of the volumes are replica 3. If I rsync the vols directory from the known good server, glusterd fails to start, saying it cannot resolve a brick, presumably from the centos7 system
20:22 bio_ JoeJulian: ah ok. server1 and server3 are actually the distributed ones, so that would make sense
20:22 bio_ thank you
20:22 JoeJulian ajneil: Did you name your bricks by hostname?
20:22 onebree JoeJulian: Is it possible to have server1:/foo be on the client, and the bricks be on remoteN:/bar
20:23 ajneil you mean gluster0:/export/brick1 ?
20:23 JoeJulian ajneil: yeah, and does the centos7 box have the same hostname?
20:23 JoeJulian Or, at least, does dns resolve for that host?
20:24 ajneil yes same hostname same ip address
20:25 onebree ajneil: how do you have rsync setup to work fine with gluster? I was mentioning earlier that a 3 brick system and rsync means a very slow sync time (longer than necessary 1 minute)
20:25 mhulsman joined #gluster
20:25 ajneil onebree: I was referring to rsyncing the /var/lib/gluster/vols directory
20:26 ajneil onebree: not a gluster volume
20:26 JoeJulian badSector: you'll have to NAT. Theoretically you can make it happen all using iptables, but I wouldn't want to be the one managing it. :D
20:26 onebree ajneil: thank you for that clarification
20:26 JoeJulian ajneil: And the original servers don't list the new server as a peer?
20:26 JoeJulian Or they do but it's rejected?
20:27 ajneil the centos 6.7 is in State: Sent and Received peer request (Connected)
20:27 ajneil the centos 7 is in State: Peer in Cluster (Disconnected)
20:27 ajneil on the known good system
20:28 ajneil on the other systems they only show the known good system as a peer State: Peer in Cluster (Connected) (ceState: Probe Sent to Peer (Connected) (centos 6.7)ntos 7) and
20:29 ajneil sorry garbled cut and paste
20:29 ajneil the centos 7 system only shows the known good server as a peer in  State: Peer in Cluster (Connected)
20:30 ajneil the centos 6.7 show the known good 6.7 server in state State: Probe Sent to Peer (Connected)
20:30 JoeJulian Here's what I suspect happened. You wiped the state directory during the reinstall (/var/lib/glusterd). When you reinstalled and started glusterd the C7 box created a new uuid for the server. The C6 boxes know an old uuid and now they don't match. I suggest you wipe /var/lib/glusterd on the C7 box. Get the uuid from /var/lib/glusterd/peers on one of the C6 boxes and use that uuid to create a new /var/lib/glusterd/glusterd.info (use the known good as a template).
20:31 JoeJulian After that, start glusterd on the C7, probe it from a C6, and everything should populate.
20:31 ajneil ok I'll try it
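A hedged sketch of those steps on the reinstalled C7 box; hostnames are placeholders and the exact glusterd.info fields are best copied from the known-good node, as suggested:

    # on a good C6 node: the file names under peers/ are the UUIDs the cluster expects
    cat /var/lib/glusterd/peers/*

    # on the C7 node: start clean and reuse the UUID the cluster already knows
    systemctl stop glusterd
    rm -rf /var/lib/glusterd/*
    # copy glusterd.info from the good node as a template, then set UUID= to the value
    # found above for this host
    vi /var/lib/glusterd/glusterd.info
    systemctl start glusterd

    # back on a good node: probe it and let the volume definitions repopulate
    gluster peer probe c7-hostname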
20:34 ajneil the uuid would not be store in meta-data on the volume bricks by chance?
20:37 JoeJulian No
20:38 calavera joined #gluster
20:41 ajneil crap now glusterd is down on the known good box
20:41 ajneil Initialization of volume 'management' failed, review your volfile again
20:41 ajneil resolve brick failed in restore
20:48 onebree I am not sure what I did wrong. I ran `make uninstall` in the gluster source tree, and installed the rpm gluster packages successfully (and all the other packages). When I try to mount, I get the error:
20:48 onebree => /usr/local/lib/glusterfs/3.5.3/xlator/mount/fuse.so: cannot open shared object file: No such file or directory
20:52 badSector JoeJulian: thanks. I was just checking that and I am not sure if it would work. I have, let's say, em2.10 with ip 10.10.10.254/24 and then another em2.20 with ip 10.10.10.253/24. So how would I implement the NAT then?
20:57 JoeJulian badSector: There's a reason I'm not a network engineer. ;)
20:57 badSector JoeJulian: I assume I would have to use a different IP address for, let's say, em2.20, ip 10.10.20.253/24, and perform some kind of NAT before the packet hits 10.10.10.253, translating it to 10.10.20.253, and before it leaves the vlan interface again from 10.10.20.253 -> 10.10.10.235. Is that possible?
20:58 onebree Oh, I think it is because I do not have gluster-fuse or gluster-rdma, and they are not available in the repo you showed me, JoeJulian
21:00 JoeJulian You only need to install -rdma if you're using rdma. And the -fuse package is there, so I'm not sure why you're having a problem.
21:01 ajneil hmm on the good node I now see this when I try and start glusterd  D [MSGID: 0] [glusterd-store.c:4118:glusterd_store_retrieve_peers] 0-management: Returning with -1
21:01 haomaiwa_ joined #gluster
21:02 onebree JoeJulian: gist of yum list gluster* results - https://gist.github.com/onebree/c67e5daff4a8a3eed707
21:02 glusterbot Title: glusterfs* · GitHub (at gist.github.com)
21:05 JoeJulian badSector: http://kb.linuxvirtualserver.org/wiki/Two-node_setup_with_overlapping_client_subnets might have some bits you can use.
21:05 glusterbot Title: Two-node setup with overlapping client subnets - LVSKB (at kb.linuxvirtualserver.org)
21:06 JoeJulian onebree: Looks like your base exclude isn't working.
21:06 badSector Thanks for pointing that. I will check it.
21:07 JoeJulian ajneil: try "glusterd --debug" to get enough info to actually diagnose something.
21:11 ajneil tried it not very illuminating
21:11 ajneil [2016-01-05 21:10:18.471337] D [MSGID: 0] [glusterd-store.c:4118:glusterd_store_retrieve_peers] 0-management: Returning with -1
21:11 ajneil [2016-01-05 21:10:18.471344] D [MSGID: 0] [glusterd-store.c:4339:glusterd_restore] 0-management: Returning -1
21:11 ajneil [2016-01-05 21:10:18.471362] E [MSGID: 101019] [xlator.c:428:xlator_init] 0-management: Initialization of volume 'management' failed, review your volfile again
21:11 ajneil [2016-01-05 21:10:18.471369] E [graph.c:322:glusterfs_graph_init] 0-management: initializing translator failed
21:12 ajneil [2016-01-05 21:10:18.471373] E [graph.c:661:glusterfs_graph_activate] 0-graph: init failed
21:13 ajneil then it segfaults
21:15 ajneil is there some way to recreate the cluster from scratch reusing the bricks and metadata?
21:15 onebree JoeJulian: I did not use --excluderepo on purpose. I wanted to show the FULL listings of gluster packages I can get
21:16 onebree Well I stand corrected, never mind on that :-)
21:16 JoeJulian :D
21:17 cornfed78 joined #gluster
21:18 onebree Do I need glusterfs-api?
21:18 cornfed78 hi all.. quick question about nfs-ganesha & gluster- hope someone can help.. I'm trying to figure out how to restrict access to certain IPs on an exported volume. By default, it exports everything r/w to the world.. When using the kernel NFS, I can set options like "nfs.rpc-auth-allow" on a given volume, but that doesn't seem to work if it's exported with ganesha
21:18 JoeJulian Only if you use it.
21:19 cornfed78 is there a way, from gluster, to restrict access to exports using ganesha?
21:19 JoeJulian no
21:19 cornfed78 or do I need to manually edit the ganesha /etc/ganesha/exports file?
21:19 JoeJulian probably
21:19 cornfed78 ok
21:19 JoeJulian I'm not so confident with my ganesha configs yet.
21:19 cornfed78 Reason I ask is this big warning at the top of the export file:
21:19 cornfed78 # WARNING : Using Gluster CLI will overwrite manual
21:19 cornfed78 # changes made to this file. To avoid it, edit the
21:19 cornfed78 ;)
21:20 onebree JoeJulian:  was that to me? IDK if I need to use glusterfs-api
21:20 jobewan joined #gluster
21:20 JoeJulian Ok, let me look at the source and see where that comes from and if it does anything with permissions.
21:20 _Bryan_ joined #gluster
21:21 JoeJulian onebree: The api is a library interface to the filesystem. You /could/ program to that instead of going through fuse. qemu, for instance, does that.
21:21 onebree oh okay
21:22 onebree Well, it is needed anyway as a dependency of glusterfs-devel
21:24 onebree With all the packages installed, I am getting better log messages. :-)
21:26 badSector I have one more question regarding gluster. One of the distributed-replica bricks ran out of space and I'm getting "xfs_log_force: error 5 returned" messages; I cannot access it. I found an article saying the filesystem gets shut down and that to recover I would have to run xfs_repair. That was for a plain XFS issue though, and here I have gluster running on top. Any idea what I should do to correct it?
21:28 JoeJulian cornfed78: Add your changes to /var/lib/glusterd/hooks/1/start/post/S31ganesha-start.sh
21:29 JoeJulian badSector: I would kill the glusterfsd process related to that damaged bricks.
21:29 JoeJulian s/s.$/./
21:29 glusterbot What JoeJulian meant to say was: badSector: I would kill the glusterfsd process related to that damaged brick.
21:30 badSector only on that affected node?
21:30 JoeJulian yes
21:31 badSector ok and then what? Try that xfs_repair command?
21:31 cornfed78 joejulian: thanks - i'll take a look at that. Looks like that would make the settings the same for all exports, though.. I'll toy around and see what I can figure out.
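For what it's worth, the per-client restriction in nfs-ganesha's own export file usually looks something like this (ganesha 2.x syntax; volume name and subnet are hypothetical, so double-check against your ganesha version):

    EXPORT {
        Export_Id = 2;
        Path = "/myvol";
        Pseudo = "/myvol";
        Access_Type = None;              # deny everyone by default
        FSAL { Name = GLUSTER; Hostname = "localhost"; Volume = "myvol"; }

        CLIENT {
            Clients = 192.168.10.0/24;   # only this subnet...
            Access_Type = RW;            # ...gets read-write access
        }
    }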
21:31 JoeJulian probably. Fixing kernel based filesystems is outside my scope.
21:32 badSector ok thanks. Will try that
21:33 cornfed78 ganesha is user space ;)
21:33 cornfed78 (or was that not at me? ;)
21:35 JoeJulian cornfed78: yeah, sorry, that was to bS.
21:53 ajneil ok immediate panic is over as I now have both 6.7 servers back up and in the cluster
21:54 onebree left #gluster
21:54 ajneil still need to figure out how to reattach my centos 7 one
22:00 JoeJulian +1
22:01 haomaiwa_ joined #gluster
22:08 ajneil ahh iptables
22:08 ajneil looks like iptables was blocking peer handshake *slaps forehead*
22:09 JoeJulian firewalld maybe?
22:15 ajneil probably - anyway I now have all three servers up and clustered and the volumes are healing
22:16 ajneil *phew*
22:17 ajneil now I just have to re-add the server in ovirt
22:29 haomaiwa_ joined #gluster
22:31 togdon joined #gluster
23:00 RedW joined #gluster
23:01 haomaiwa_ joined #gluster
