
IRC log for #gluster, 2016-03-01


All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:13 theron joined #gluster
00:18 ovaistariq joined #gluster
00:18 muneerse2 joined #gluster
00:52 plarsen joined #gluster
01:01 haomaiwa_ joined #gluster
01:06 EinstCrazy joined #gluster
01:22 theron joined #gluster
01:22 tswartz joined #gluster
01:33 sonicrose joined #gluster
01:35 sonicrose hi all… hoping someone can answer a quick question…  if I have a distribute volume with two 4TB bricks and two 6TB bricks, and I write 16TB of data, how would it be distributed?  would the 4TB bricks be full, or would it keep an even percentage of free space on each brick?
01:36 sonicrose like would there be 4TB on each brick, or would it be something like 3 3 5 5
01:41 sonicrose my actual usage scenario is a bit more complicated than that, but i’m trying to simplify the terms
01:46 gbox I believe the 4TB bricks will fill up and the 6TB will still have 2TB free space each
01:55 klaxa joined #gluster
01:55 sonicrose uggghhh that sucks… any idea how to avoid that?
01:55 sonicrose maybe i need to explain the actual situation then
01:56 sonicrose maybe not, i might be ok
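The answers in this thread differ on how DHT spreads data across unequal bricks (gbox above expects the 4TB bricks to fill; post-factum later says bricks are populated proportionally to size), but either way the cluster.min-free-disk volume option can keep new files off bricks that are running low on space. A minimal sketch, assuming a hypothetical volume called myvol and an illustrative threshold:

    # divert new files away from any brick with less than 10% free space
    gluster volume set myvol cluster.min-free-disk 10%

    # check per-brick usage afterwards
    gluster volume status myvol detail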
02:07 DV__ joined #gluster
02:07 haomaiwang joined #gluster
02:08 baojg joined #gluster
02:18 tyler274 joined #gluster
02:18 tyler274 question about replication
02:18 tyler274 is there any way to set up a volume whose data is stored on only 1 node (no replication), but still have the metadata/arbiter on a specified head node
02:19 tyler274 as in, I have 1 head server, and 2 storage arrays, and I want to have a not-important-stuff volume that doesn't need any replication
02:20 tyler274 but would still like to have the metadata and such on my head node
02:20 tyler274 second question: is having an arbiter in a system with 3 replication nodes + 1 arbiter node possible? I don't have the third server ready yet but would appreciate knowing this in advance
02:22 harish joined #gluster
02:24 jhyland joined #gluster
02:25 farhorizon joined #gluster
02:26 baojg joined #gluster
02:30 muneerse joined #gluster
02:34 baojg joined #gluster
02:37 nbalacha joined #gluster
02:48 harish joined #gluster
03:01 haomaiwa_ joined #gluster
03:03 overclk joined #gluster
03:06 Lee1092 joined #gluster
03:07 jdossey joined #gluster
03:08 nehar joined #gluster
03:09 tyler274 anyone?
03:17 arcolife joined #gluster
03:21 baojg joined #gluster
03:27 DV joined #gluster
03:50 overclk joined #gluster
03:51 sakshi joined #gluster
03:53 ovaistariq joined #gluster
03:55 gem joined #gluster
03:56 sonicrose tyler… i’m not a super duper expert here… but i haven’t ever heard of an arbiter node in gluster.  as far as metadata, there is none really… all the info is stored with the files themselves as extended attributes.   you can create both replicated and non-replicated volumes on the same pool, using the same bricks
03:57 jhyland joined #gluster
03:59 shubhendu joined #gluster
03:59 tyler274 an arbiter node is a special node in a replica volume that stores only metadata and directory info to prevent split brains
04:00 tyler274 http://gluster.readthedocs.org/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/?highlight=disperse#arbiter-configuration-for-replica-volumes
04:00 glusterbot Title: Setting Up Volumes - Gluster Docs (at gluster.readthedocs.org)
04:01 theron joined #gluster
04:01 tyler274 I'm looking to migrate from xtreemfs due to its subpar acl support, and my current setup had the headnode as the metadata store for the cluster
04:01 haomaiwa_ joined #gluster
04:02 sonicrose oh.. this is part of the “newish” distribute translator, i’ve not used that at all! sorry
04:02 itisravi joined #gluster
04:03 theron joined #gluster
04:03 sonicrose sorry… i meant disperse translator
04:03 sonicrose i haven’t played with that yet…  disregard what i said!
04:04 tyler274 yeah i was having difficulties finding info on disperse and the like
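For reference, the arbiter configuration linked above is chosen at volume-creation time: with "replica 3 arbiter 1" the last brick listed holds only file metadata and directory structure. A sketch with made-up hosts and brick paths, which also shows the plain, unreplicated volume sonicrose mentions being possible on the same pool:

    # replicated volume whose third brick is a metadata-only arbiter
    gluster volume create importantvol replica 3 arbiter 1 \
        storage1:/bricks/important storage2:/bricks/important headnode:/bricks/arbiter
    gluster volume start importantvol

    # a separate, unreplicated volume can reuse the same pool of peers
    gluster volume create scratchvol storage1:/bricks/scratch storage2:/bricks/scratch
    gluster volume start scratchvol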
04:09 calavera joined #gluster
04:13 ovaistariq joined #gluster
04:13 atinm joined #gluster
04:20 haomai___ joined #gluster
04:24 sonicrose left #gluster
04:25 overclk joined #gluster
04:26 haomaiwa_ joined #gluster
04:30 ppai joined #gluster
04:31 ndarshan joined #gluster
04:32 kanagaraj joined #gluster
04:40 karthikfff joined #gluster
04:44 shubhendu joined #gluster
04:47 gem joined #gluster
04:48 nbalacha joined #gluster
04:51 RameshN joined #gluster
04:55 kotreshhr joined #gluster
04:58 Manikandan joined #gluster
04:58 pppp joined #gluster
05:01 haomaiwa_ joined #gluster
05:03 ashiq joined #gluster
05:04 jiffin joined #gluster
05:10 Apeksha joined #gluster
05:18 ndarshan joined #gluster
05:18 nishanth joined #gluster
05:19 kasturi joined #gluster
05:20 ggarg joined #gluster
05:25 aravindavk joined #gluster
05:28 shubhendu joined #gluster
05:29 harish_ joined #gluster
05:31 unforgiven512 joined #gluster
05:35 poornimag joined #gluster
05:37 baojg joined #gluster
05:42 Bhaskarakiran joined #gluster
05:43 gowtham joined #gluster
05:45 haomaiwa_ joined #gluster
05:47 karnan joined #gluster
05:47 kdhananjay joined #gluster
05:52 vmallika joined #gluster
06:01 harish joined #gluster
06:01 haomaiwa_ joined #gluster
06:04 arcolife joined #gluster
06:05 Manikandan joined #gluster
06:08 calavera joined #gluster
06:21 nangthang joined #gluster
06:23 baojg joined #gluster
06:25 kotreshhr joined #gluster
06:28 kdhananjay joined #gluster
06:32 ramky joined #gluster
06:35 David_Varghese joined #gluster
06:35 hgowtham joined #gluster
06:40 DV__ joined #gluster
06:44 Manikandan joined #gluster
06:45 DV joined #gluster
06:46 deepakcs joined #gluster
06:47 aravindavk joined #gluster
06:52 deepakcs left #gluster
06:52 kshlm joined #gluster
07:01 haomaiwa_ joined #gluster
07:02 kovshenin joined #gluster
07:02 kotreshhr joined #gluster
07:05 hchiramm joined #gluster
07:09 kayn joined #gluster
07:17 jtux joined #gluster
07:19 mhulsman joined #gluster
07:24 mhulsman1 joined #gluster
07:26 post-factum sonicrose: afaik, bricks are populated proportionally to their size
07:26 nehar joined #gluster
07:28 sakshi joined #gluster
07:40 mhulsman joined #gluster
07:44 kdhananjay joined #gluster
07:53 ekuric joined #gluster
07:55 mhulsman joined #gluster
07:56 arcolife joined #gluster
07:59 haomaiwa_ joined #gluster
08:01 haomaiwa_ joined #gluster
08:03 [Enrico] joined #gluster
08:05 kshlm joined #gluster
08:05 mhulsman joined #gluster
08:14 mhulsman joined #gluster
08:16 jri joined #gluster
08:21 ivan_rossi joined #gluster
08:24 jhyland joined #gluster
08:32 lord4163 joined #gluster
08:34 ashiq_ joined #gluster
08:34 jugaad joined #gluster
08:34 itisravi joined #gluster
08:39 F2Knight joined #gluster
08:48 atalur joined #gluster
08:53 harish joined #gluster
08:57 Slashman joined #gluster
08:57 ctria joined #gluster
08:57 Saravanakmr joined #gluster
09:01 haomaiwa_ joined #gluster
09:01 lord4163 joined #gluster
09:06 mhulsman1 joined #gluster
09:14 matclayton joined #gluster
09:14 ahino joined #gluster
09:19 matclayton_ joined #gluster
09:19 baojg joined #gluster
09:22 fedele joined #gluster
09:24 fedele good morning, I would ask you about a cluster configuration:
09:24 Manikandan joined #gluster
09:27 fedele Do you think an HPC cluster on which I have installed GlusterFS is reliable?
09:28 fedele Let me explain:
09:31 fedele I have an HPC cluster; each node has 1 extra disk and the network is Infiniband at 40 Gb/s. The users run MPI jobs on this cluster, and I have used the extra disk of each node to build a GlusterFS volume.
09:32 hchiramm joined #gluster
09:33 bhuddah ...
09:33 bhuddah fedele: how many times are you gonna ask that again? ^^
09:33 fedele The question is: do you think it is safe to run MPI jobs and glusterfs together on all nodes of this cluster?
09:34 bhuddah depends on the definition of safe i think.
09:34 fedele #bhuddah: thank you for your patience
09:35 fedele safe in the sense: is it possible to run MPI jobs and use the glusterfs at the same time?
09:36 fedele #bhuddah: one of my users says that after the gluster implementation their job is 50% slower
09:37 fedele I ran tests, especially linpack.
09:38 fedele linpack uses all the memory of each node, and the results are strange:
09:38 fedele You remember: my cluster is 32 nodes
09:39 fedele If I run linpack on 16 nodes, all is OK!
09:39 fedele It runs, the results are optimal, and gluster is fine too
09:40 fsimonce joined #gluster
09:40 matclayton joined #gluster
09:41 fedele But if linpack runs on 32 nodes, one of the gluster nodes disconnects and linpack doesn't finish
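One possibility worth checking here (an assumption, not a confirmed diagnosis of the linpack failure) is the ping timeout: if a fully loaded node cannot answer gluster's heartbeats within network.ping-timeout seconds, it gets marked disconnected. A sketch with a placeholder volume name:

    # show the current value (the default is 42 seconds)
    gluster volume get hpcvol network.ping-timeout

    # raise it if saturated nodes are being dropped during heavy MPI runs
    gluster volume set hpcvol network.ping-timeout 90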
09:46 arcolife joined #gluster
09:54 EinstCrazy joined #gluster
09:55 bhuddah that sounds indeed a little bit strange.
09:56 fedele what: the linpack results or the user complaint?
09:57 bhuddah all of the above. sadly i can't give you any valuable input on that. i'm not experienced with HPC at all :(
09:58 fedele ok, but you run glusterfs and user jobs on your systems?
10:00 [diablo] joined #gluster
10:01 bhuddah nothing that can be translated for your situation sorry.
10:01 haomaiwa_ joined #gluster
10:01 fedele can you suggest where I can submit my problem?
10:02 bhuddah if not here ... there must be a mailing list, i think...
10:02 fedele thank you again bhuddah
10:02 bhuddah https://www.gluster.org/mailman/listinfo/gluster-users
10:02 glusterbot Title: Gluster-users Info Page (at www.gluster.org)
10:03 ira joined #gluster
10:07 mhulsman joined #gluster
10:07 kovshenin joined #gluster
10:16 EinstCrazy joined #gluster
10:18 fedele e-mail sent on glusterfs-users
10:31 felicity joined #gluster
10:33 felicity i have a three-server glusterfs filesystem (using replicate), and i'm seeing very slow performance for small writes: https://dpaste.de/yYdB - another cluster on nearly identical hardware runs the same test about 10x faster.  there's no io bottleneck, CPU is only ~10%, and network throughput is ~800Mbps with 0.5ms latency.  what else might be causing the slowness?
10:33 glusterbot Title: dpaste.de: Snippet #354363 (at dpaste.de)
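The paste itself is not reproduced in the log, but a small synchronous-write test of the kind being described can be run against the FUSE mount with dd (the path and sizes below are illustrative):

    # 10,000 x 4 KiB synchronous writes through the gluster mount
    dd if=/dev/zero of=/mnt/glustervol/ddtest bs=4k count=10000 oflag=dsync
    rm /mnt/glustervol/ddtest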
10:34 matclayton joined #gluster
10:37 mhulsman1 joined #gluster
10:40 Manikandan joined #gluster
10:41 baojg joined #gluster
10:49 nehar joined #gluster
10:59 Akee joined #gluster
11:01 haomaiwa_ joined #gluster
11:03 purnima joined #gluster
11:05 hackman joined #gluster
11:07 mhulsman joined #gluster
11:29 arcolife joined #gluster
11:31 B21956 joined #gluster
11:31 bfoster joined #gluster
11:35 shyam joined #gluster
11:56 jiffin REMINDER: Gluster Community Bug Triage meeting in #gluster-meeting at 12:00 UTC
12:01 haomaiwa_ joined #gluster
12:02 johnmilton joined #gluster
12:04 Bhaskarakiran_ joined #gluster
12:16 Wizek joined #gluster
12:17 kshlm joined #gluster
12:19 gem joined #gluster
12:19 chirino joined #gluster
12:22 felicity hmm, looks like this was caused by upgrading to 3.7.8, switching to NFS seems to have worked around it
12:30 kanagaraj joined #gluster
12:36 gem joined #gluster
12:38 baoboa joined #gluster
12:53 raginbajin Has anyone done a rolling update from 3.7.2 to 3.7.8?  It seems that I keep running into issues that a node won't connect because the versions don't match.
12:54 raginbajin rolling upgrades/updates is pretty important as I can't keep taking down storage all the time.
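For what it's worth, the commonly described rolling-upgrade sequence (for replicated volumes, so clients can ride out one server being down) is one server at a time, waiting for self-heal before moving on. The commands below are a sketch assuming an RPM-based install and a placeholder volume name; the 3.7.x release notes are the authoritative procedure:

    # on one server at a time
    service glusterd stop
    pkill glusterfs                 # also stops brick (glusterfsd) and self-heal processes
    yum update glusterfs\*          # or the distro's equivalent
    service glusterd start

    # before moving on to the next server
    gluster peer status
    gluster volume heal myvol info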
12:59 jiffin felicity: http://review.gluster.org/#/c/13540/ should fix that issue
12:59 glusterbot Title: Gerrit Code Review (at review.gluster.org)
13:00 jiffin it was a performance regression caused in 3.7.7
13:00 puiterwijk joined #gluster
13:00 kanagaraj joined #gluster
13:01 puiterwijk RameshN: hi. I've been asked to review your gluster/nagios packages. Do you want to discuss them here, or move to PM?
13:01 felicity jiffin: i see, thanks
13:01 haomaiwa_ joined #gluster
13:05 Bhaskarakiran joined #gluster
13:08 mhulsman1 joined #gluster
13:10 julim joined #gluster
13:12 Trefex joined #gluster
13:14 nishanth joined #gluster
13:15 shubhendu joined #gluster
13:16 haomaiwa_ joined #gluster
13:18 RameshN joined #gluster
13:25 theron joined #gluster
13:29 jiffin1 joined #gluster
13:31 bennyturns joined #gluster
13:33 btpier joined #gluster
13:39 sebamontini joined #gluster
13:40 Apeksha joined #gluster
13:41 haomaiwang joined #gluster
13:43 Apeksha joined #gluster
13:46 theron joined #gluster
13:56 theron joined #gluster
13:57 aravindavk joined #gluster
13:59 ahino1 joined #gluster
14:01 DV joined #gluster
14:01 haomaiwa_ joined #gluster
14:02 DV__ joined #gluster
14:02 jdossey joined #gluster
14:09 mhulsman joined #gluster
14:10 baojg joined #gluster
14:12 Slashman joined #gluster
14:17 nehar joined #gluster
14:21 theron joined #gluster
14:24 nishanth joined #gluster
14:25 baojg joined #gluster
14:35 B21956 joined #gluster
14:36 kdhananjay joined #gluster
14:36 calavera joined #gluster
14:38 hamiller joined #gluster
14:39 kdhananjay joined #gluster
14:39 EinstCrazy joined #gluster
14:40 kdhananjay joined #gluster
14:49 Slashman joined #gluster
14:50 EinstCrazy joined #gluster
14:52 drankis joined #gluster
14:54 plarsen joined #gluster
14:55 skylar joined #gluster
14:59 theron joined #gluster
15:01 haomaiwang joined #gluster
15:04 shubhendu joined #gluster
15:15 nishanth joined #gluster
15:18 ahino joined #gluster
15:23 nangthang joined #gluster
15:24 jdossey joined #gluster
15:30 theron joined #gluster
15:38 bennyturns joined #gluster
15:42 theron joined #gluster
15:45 coredump joined #gluster
15:56 jhyland joined #gluster
15:57 farhorizon joined #gluster
16:01 haomaiwa_ joined #gluster
16:04 wushudoin joined #gluster
16:04 Gaurav__ joined #gluster
16:08 B21956 joined #gluster
16:09 wushudoin joined #gluster
16:20 arcolife joined #gluster
16:24 sonicrose joined #gluster
16:25 matclayton joined #gluster
16:26 sonicrose hi all.  i’m trying to add a new node to my pool, but I keep getting peer rejected.  I think it has to do with quota being enabled on the volumes…  in the past, i’ve had to do volume reset and disable quota, and I can add the node.  is there a workaround for this, is it a known issue?
16:26 sonicrose this is on 3.7.6-1 el6 x86_64
16:27 matclayton joined #gluster
16:54 shubhendu joined #gluster
16:59 JoeJulian sonicrose: What's your glusterd log say?
17:00 sonicrose on which node, the one being added or the one I’m adding it from
17:00 felicity left #gluster
17:01 JoeJulian I'd check both but if the new *server* is being rejected, it's probably in one of the existing peer's logs.
17:01 haomaiwang joined #gluster
17:01 JoeJulian I suspect it's a hash mismatch caused by a volume info file not being the same as the other servers.
17:03 sonicrose would that be in etc-glusterfs-glusterd.vol.log ?
17:03 JoeJulian yes
17:04 F2Knight joined #gluster
17:07 sonicrose just a sec, i’ll collect the logs
17:09 kotreshhr left #gluster
17:15 mhulsman joined #gluster
17:17 sonicrose ok, theres logs here…   i did the probe from fs130     http://sonicdigital.net/glog
17:17 glusterbot Title: Index of /glog (at sonicdigital.net)
17:17 sonicrose i do see some checksum mismatch messages
17:17 sonicrose but what causes this… its a brand new node with a fresh install
17:19 sonicrose the logs for fs131 thru fs133 seem to not have anything
17:20 sonicrose i’ve double checked that iptables is setup correctly everywhere
17:20 calavera joined #gluster
17:21 F2Knight joined #gluster
17:22 sagarhani joined #gluster
17:22 sonicrose wouldnt the volume info be copied from the existing peers, in which case how would the new node have a non matching checksum?
17:23 mlswiss joined #gluster
17:29 mhulsman joined #gluster
17:32 Bhaskarakiran joined #gluster
17:35 matclayton joined #gluster
17:38 arcolife joined #gluster
17:38 sonicrose well hmm… so in the past i was able to fix this by doing volume reset volxxxx force on all the volumes… but i just did that and I’m still getting rejected :/
17:39 sonicrose i even stopped gluster on the new node, and rm -rf /var/lib/glusterd… same result :/
17:40 jiffin joined #gluster
17:41 nathwill joined #gluster
17:42 JoeJulian sonicrose: I don't know why but I've been noticing a number of people with the same issue on the mailing list.
17:42 JoeJulian Copying the info file seems to fix it.
17:43 sonicrose how would i go about that?
17:44 shubhendu joined #gluster
17:44 JoeJulian The file is in /var/lib/glusterd/vols/$volname/info . I assume you're not asking how to copy it. :)
17:45 sonicrose just copy that file from any existing node onto the new node (before or after peer probe?)
17:45 ivan_rossi left #gluster
17:46 JoeJulian I think it would have to be after. Probably means restarting glusterd after it's copied, too.
17:46 sonicrose i’ll give that a try
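A sketch of the copy JoeJulian describes, run on the rejected node (the source hostname is a placeholder; keep a backup of the local state first):

    # on the rejected peer
    service glusterd stop
    cp -a /var/lib/glusterd/vols /var/lib/glusterd/vols.bak

    # pull the volume definitions from a healthy peer
    rsync -av --delete goodpeer:/var/lib/glusterd/vols/ /var/lib/glusterd/vols/

    service glusterd start
    gluster peer status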
17:49 sebamontini joined #gluster
17:49 nishanth joined #gluster
17:51 sonicrose diff info info2
17:51 sonicrose 16c16
17:51 sonicrose < quota-version=0
17:51 sonicrose ---
17:51 glusterbot sonicrose: -'s karma is now -355
17:51 sonicrose > quota-version=1
17:51 sonicrose 20,21d19
17:51 sonicrose < features.inode-quota=off
17:51 sonicrose < features.quota=off
17:51 sonicrose 22a21,22
17:51 sonicrose > features.quota=off
17:51 sonicrose > features.inode-quota=off
17:51 sonicrose gah.. sorry didnt mean to paste so much
17:52 sonicrose info is the info that was auto-created on the new node…   info2 is the info file from one of the existing nodes
17:52 sonicrose looks like it is quota related as i thought
17:53 JoeJulian Cool. Could you file a bug with that info? Hopefully the devs can figure out what's going wrong with that.
17:53 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:54 muneerse joined #gluster
17:56 sonicrose sure…  in the meantime, I copied the info file over but now glusterd wont start on the new node :/
17:56 sonicrose should i copy the whole /vols folder?
17:56 JoeJulian Yes
17:57 JoeJulian Can the new server resolve all the hostnames used in the brick definitions?
18:00 sonicrose using static IPs only
18:01 arcolife joined #gluster
18:01 sonicrose i defined the bricks using IP instead of hostname, hopefully to remove any delays caused by lookups
18:01 haomaiwa_ joined #gluster
18:01 JoeJulian That'll come back to bite you.
18:01 JoeJulian Were you encountering some sort of hostname resolution delay?
18:05 sonicrose not that i could tell… i just figured it was safer to use IPs only.  the systems do all have hostnames that are resolvable by DNS, and I have entries in /etc/hosts on all of them
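The usual middle ground here is to define bricks by hostname but pin the names in /etc/hosts, so there is no runtime DNS dependency yet the addresses can still be changed later. A sketch with made-up names and documentation-range addresses:

    # /etc/hosts on every server and client
    192.0.2.10  server1
    192.0.2.11  server2

    # bricks defined by name rather than raw IP
    gluster volume create myvol replica 2 server1:/bricks/b1 server2:/bricks/b1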
18:05 robb_nl joined #gluster
18:06 raginbajin Has anyone done a rolling update from 3.7.2 to 3.7.8?  It seems that I keep running into issues that a node won't connect because the versions don't match
18:08 JoeJulian I have and my "nodes" connected just fine, at least all the "nodes" that are supposed to connect. My printers don't.
18:09 JoeJulian Why did my industry start using the word "node" for whatever thing they happen to be thinking about at the moment.... <sigh>
18:09 theron joined #gluster
18:11 matclayton joined #gluster
18:11 sonicrose haha
18:12 F2Knight joined #gluster
18:19 matclayton joined #gluster
18:20 sonicrose uh oh…  things just got worse…  i just restarted glusterd on one of the existing nodes and now it’s getting rejected too ...
18:22 sonicrose i compared all the info files between the servers… some list features.quota=off first, some list features.inode-quota=off first
18:22 farhorizon joined #gluster
18:23 sonicrose i think imma have to copy all the data off all the volumes, delete all the volumes and start all over
18:23 sonicrose what a pain in the butt :/   this is a horrible bug
18:31 kanagaraj joined #gluster
18:31 sonicrose guess i might as well update to 378 while i’m at it
18:32 and`_ joined #gluster
18:33 Larsen__ joined #gluster
18:34 rideh- joined #gluster
18:34 yosafbridge` joined #gluster
18:36 sonicrose soon, i wont have the ability to copy everything off and rebuild the volumes… so i guess for now i’ll just keep quota disabled
18:37 sonicrose then hopefully i’ll be able to add future notes without issue
18:37 sonicrose nodes*
18:48 matclayton joined #gluster
19:05 ilbot3 joined #gluster
19:05 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
19:07 dnoland1 joined #gluster
19:10 dataio joined #gluster
19:12 techsenshi joined #gluster
19:12 sonicrose disregard, i found 378 and 377 notes
19:14 Norky joined #gluster
19:18 post-factum joined #gluster
19:26 moss joined #gluster
19:29 kalzz joined #gluster
19:30 scubacuda joined #gluster
19:31 ackjewt joined #gluster
19:31 dataio joined #gluster
19:31 puiterwijk joined #gluster
19:31 Chr1st1an joined #gluster
19:31 steveeJ joined #gluster
19:31 d4n13L joined #gluster
19:31 tdasilva joined #gluster
19:31 Arrfab joined #gluster
19:31 Chinorro joined #gluster
19:32 syadnom joined #gluster
19:33 syadnom hi all, looking for a way to completely disable, or change, the nfs port gluster uses so it doesnt interfere with the system nfs server...
19:34 dnoland1 syadmom: use gluster volume get/set
19:34 jiffin syadnom: you can turn off nfs-server for all the volume using gluster v set <volname> nfs.disable on
19:36 jiffin syadnom: gluster nfs will be up only if it has volume to export
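Putting jiffin's answers together: disable the built-in gluster NFS server on every volume, confirm nothing is still registered with portmap, and then the kernel NFS server can take the standard ports. The volume names below are placeholders:

    # repeat for each volume currently exported by gluster's nfs server
    gluster volume set vol1 nfs.disable on
    gluster volume set vol2 nfs.disable on

    # verify gluster no longer registers an nfs service
    rpcinfo -p

    # then (re)start the system NFS server
    service nfs restart            # or the distro's equivalent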
19:36 sonicrose just a followup to my earlier issue: I copied off all the data and deleted the volumes that previously had quota enabled.  Now I can probe that peer OK
19:38 syadnom jiffin, do you know how to configure what portmap is tying port 111 to?
19:38 jiffin syadnom: nope
19:39 syadnom once I install gluster, nfs wont restart :(
19:39 syadnom rather the NFS daemon fails to start
19:42 petan joined #gluster
19:44 jiffin syadnom: are u sure that gluster nfs is not running and that all the ports used by it were cleaned up properly?
19:44 steveeJ I've got a volume with replica 3 and 3 bricks. they lost network connection for a second and now they're refusing to start. how do I let it start again?
19:45 syadnom jiffin, I don't have a glusternfs service...
19:46 jiffin syadnom: what does rpcinfo -p shows?
19:46 syadnom 100000    2   tcp    111  portmapper
19:46 syadnom 100000    2   udp    111  portmapper
19:46 jiffin steveeJ: did u mean bricks are offline?
19:47 steveeJ jiffin: probably yes
19:47 steveeJ but they're online again
19:47 jiffin steveeJ: it may take few seconds to establish connection
19:48 steveeJ jiffin: it has been many minutes. I tried to restart glusterd on the nodes but they fail to start, no quorum
19:48 jiffin syadnom: u can check the /var/log/messages to find hints
19:49 steveeJ shouldn't they at least wait a few seconds to see if they can establish a quorum?
19:49 steveeJ I can't possibly start them at the exact same time
19:49 jiffin for bringing back the bricks , u don't need to restart glusterd
19:50 jiffin gluster v start <volname> force will be enough
19:50 jiffin if the status of all the peers are in "connected state"
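In command form, the sequence jiffin is describing (the volume name is a placeholder):

    # confirm all peers are connected first
    gluster peer status

    # restart any offline brick processes
    gluster volume start replvol force

    # check that every brick now shows a port and is online
    gluster volume status replvol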
19:52 jiffin syadnom: if u didn't get any, try running the same at a lower debug level using the "rpcdebug" command
19:52 syadnom well, I found that the nfsd kernel module wasnt loaded
19:52 syadnom now I can get nfs to restart when I disable nfs on the gluster vol
19:54 ttkg joined #gluster
19:59 calavera joined #gluster
20:01 haomaiwa_ joined #gluster
20:02 theron joined #gluster
20:08 julim joined #gluster
20:09 ahino joined #gluster
20:22 matclayton joined #gluster
20:26 DV joined #gluster
20:32 mhulsman joined #gluster
20:43 ovaistariq joined #gluster
20:55 delhage joined #gluster
20:55 julim joined #gluster
20:58 farhorizon joined #gluster
21:00 farhoriz_ joined #gluster
21:01 sebamontini joined #gluster
21:01 haomaiwa_ joined #gluster
21:07 calavera joined #gluster
21:19 JoeJulian File level snapshots coming soon to a clustered filesystem near you: https://public.pad.fsfe.org/p/Snapshots_in_glusterfs
21:19 glusterbot Title: FSFE Etherpad: public instance (at public.pad.fsfe.org)
21:20 codex joined #gluster
21:21 codex left #gluster
21:30 post-factum so, snapshots are xfs-dependent?
21:42 tessier # gluster peer probe 10.0.1.21
21:42 tessier peer probe: failed: Probe returned with Transport endpoint is not connected
21:42 tessier Anyone know what causes this? Googling says firewall or the question goes unanswered.
21:42 tessier I've checked on both sides, no firewall. The brick servers are successfully serving to other machines and have been for months.
21:43 tessier But this is a new gluster client.
21:43 farhorizon joined #gluster
21:44 ovaistariq joined #gluster
21:52 deniszh joined #gluster
21:55 JoeJulian tessier: The only two paths to that error are a dns lookup failure, or an inability to connect to port 24007 on 10.0.1.21.
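Both failure paths can be checked quickly from the probing node; a sketch (the hostname is a placeholder for probes done by name, and the nc check mirrors the one tessier runs below):

    # only relevant when probing by name; a raw IP skips DNS entirely
    getent hosts some-peer-hostname

    # can we reach glusterd's management port on the target?
    nc -v 10.0.1.21 24007          # interrupt once it reports a connection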
22:01 haomaiwa_ joined #gluster
22:06 farhoriz_ joined #gluster
22:07 DV joined #gluster
22:10 jdossey joined #gluster
22:12 arcolife joined #gluster
22:35 gbox joined #gluster
22:39 ayma joined #gluster
22:48 tessier JoeJulian: Weird...no dns involved here...
22:48 tessier # nc -v 10.0.1.21 24007
22:48 tessier Ncat: Version 6.40 ( http://nmap.org/ncat )
22:48 tessier Ncat: Connected to 10.0.1.21:24007.
22:48 glusterbot Title: Ncat - Netcat for the 21st Century (at nmap.org)
22:48 tessier I get a TCP connection...
22:49 glisignoli joined #gluster
23:01 haomaiwa_ joined #gluster
23:07 farhorizon joined #gluster
23:09 dnoland1 left #gluster
23:12 JoeJulian tessier: Sorry, had an impromptu meeting. I guess the next thing I would try, if it was me, would be maybe a wireshark capture? Or maybe run glusterd in the foreground in debug mode (glusterd --debug).
23:22 nathwill joined #gluster
23:27 bluenemo joined #gluster
23:46 ovaistariq joined #gluster
23:50 ovaistar_ joined #gluster
