IRC log for #gluster, 2016-06-19


All times shown according to UTC.

Time Nick Message
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:18 samikshan joined #gluster
02:41 gem joined #gluster
02:50 overclk joined #gluster
03:06 Gambit15 joined #gluster
03:14 kramdoss_ joined #gluster
03:34 d0nn1e joined #gluster
04:24 raghug joined #gluster
04:26 DV joined #gluster
04:37 samikshan joined #gluster
05:18 Lee1092 joined #gluster
05:21 pgreg joined #gluster
05:47 nbalacha joined #gluster
06:01 aravindavk joined #gluster
06:08 pgreg_ joined #gluster
07:00 DV joined #gluster
07:04 sflfr joined #gluster
07:05 kramdoss_ joined #gluster
07:07 sflfr Hello! My gluster nodes are at a load average of over 100 and glusterfsd seems to be working hard. This started after I activated quota on many directories. Is this OK, or does my cluster have a problem?
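Enabling quota makes Gluster crawl each limited directory tree to build usage accounting, which can keep glusterfsd busy for a while. A minimal sketch of the commands involved and of how to watch the resulting load, assuming a hypothetical volume named "data" and directory "/projects":

    # enable quota on the volume and put a limit on one directory
    gluster volume quota data enable
    gluster volume quota data limit-usage /projects 10GB

    # show configured limits and current usage
    gluster volume quota data list

    # watch glusterfsd CPU use and the node load average
    top -b -n 1 | grep glusterfsd
    uptime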
07:22 shortdudey123 joined #gluster
07:37 DV joined #gluster
07:44 kramdoss_ joined #gluster
07:48 ahino joined #gluster
08:10 msvbhat joined #gluster
09:00 sflfr left #gluster
09:57 kovshenin joined #gluster
10:49 atinm joined #gluster
10:59 JesperA joined #gluster
11:07 Javezim joined #gluster
11:22 aravindavk joined #gluster
11:23 gem joined #gluster
11:36 msvbhat joined #gluster
11:47 Gnomethrower joined #gluster
12:19 masuberu joined #gluster
12:35 ahino joined #gluster
12:48 Intensity joined #gluster
13:13 nehar joined #gluster
13:30 kovshenin joined #gluster
13:44 ahino joined #gluster
14:00 skoduri joined #gluster
14:04 harish joined #gluster
14:04 aravindavk joined #gluster
14:08 harish joined #gluster
14:10 johnmilton joined #gluster
14:10 harish joined #gluster
14:29 johnmilton joined #gluster
14:42 jwd joined #gluster
14:58 plarsen joined #gluster
15:00 msvbhat joined #gluster
15:25 gem joined #gluster
15:27 Lee1092 joined #gluster
15:38 msvbhat joined #gluster
15:46 plarsen joined #gluster
15:51 d0nn1e joined #gluster
15:58 Philambdo joined #gluster
16:26 Gambit15 joined #gluster
16:40 gem joined #gluster
16:47 plarsen joined #gluster
16:54 johnmilton joined #gluster
16:54 jwd joined #gluster
17:21 msvbhat joined #gluster
17:26 hagarth joined #gluster
17:34 Philambdo joined #gluster
17:41 msvbhat joined #gluster
17:47 jiffin joined #gluster
17:47 ahino joined #gluster
17:57 jiffin joined #gluster
18:05 kotreshhr joined #gluster
18:05 jiffin joined #gluster
18:17 zuzu joined #gluster
18:18 zuzu hello anyone here?
18:20 zuzu I get an error when creating a glusterfs volume on centos 7
18:20 zuzu just found the correct logs :)
18:20 zuzu "brick2.mount_dir not present"
18:33 jiffin zuzu: did the creation fail?
18:34 zuzu yeah, looking at network/dns now
18:34 zuzu I wasn't consistent with the first probes
18:34 zuzu used hostnames and IPs
18:34 zuzu didn't know the probe would matter later on
18:35 zuzu so trying to put everything on IP now
18:35 jiffin zuzu: it should
18:35 jiffin so what is the output of gluster peer status?
18:35 zuzu btw, was just following this http://www.ovirt.org/blog/2016/03/up-and-running-with-ovirt-3-6/
18:35 glusterbot Title: Up and Running with oVirt 3.6 and Gluster Storage oVirt (at www.ovirt.org)
18:35 jiffin zuzu: i am not an ovirt expert but I can help u set up gluster
18:36 zuzu yeah I got to the part with creating volumes :)
18:36 zuzu so I have 3 nodes
18:36 zuzu centos7-{1,2,3}
18:37 jiffin zuzu: K
18:37 zuzu I detached the centos7-1 on the 2 other nodes
18:37 zuzu trying to get it up again now with IP ...
18:37 zuzu but might not have been the right thing to do
18:38 zuzu I'm knew to using glusterfs, can you tell? :P
18:38 zuzu s/kn/n/
18:38 glusterbot What zuzu meant to say was: I'm new to using glusterfs, can you tell? :P
18:39 jiffin it is not an issue I guess
18:39 jiffin did the peer probe fail?
18:40 jiffin after peer probing, did u check gluster peer status on all the nodes?
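A sketch of the probe-and-verify sequence being described here; the node IPs are an assumption taken from the volume create command later in this session:

    # from the first node, probe the other two by IP
    gluster peer probe 192.168.109.163
    gluster peer probe 192.168.109.164

    # then, on every node, confirm each peer shows "Peer in Cluster (Connected)"
    gluster peer status
    gluster pool list    # compact view: UUID, hostname, state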
18:40 hagarth joined #gluster
18:40 zuzu jiffin: http://paste.fedoraproject.org/381723/61617146/
18:40 glusterbot Title: #381723 Fedora Project Pastebin (at paste.fedoraproject.org)
18:40 zuzu no just on centos7-1
18:41 zuzu I would like to clean centos7-1 out again and start over
18:41 zuzu peer status seems borked
18:41 jiffin zuzu: the glusterd is running on all the nodes right?
18:41 zuzu I tried removing the peers by deleting the files there ...
18:41 zuzu yes
18:41 jiffin seems to be a firewalld issue
18:41 zuzu it is off everywhere
18:42 zuzu as per the guide
18:42 jiffin can u try to disable firewalld or flush iptables and try again
18:42 jiffin ?
18:42 jiffin k
18:42 yosafbridge` joined #gluster
18:42 zuzu what port is it? can telnet to verify
18:43 ninkotech joined #gluster
18:43 jiffin did u mean to say what port glusterd is using?
18:43 zuzu yeah
18:44 autostat1c joined #gluster
18:44 jiffin http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting
18:44 Anarka_ joined #gluster
18:44 squeakyneb_ joined #gluster
18:45 siel_ joined #gluster
18:45 ndevos_ joined #gluster
18:45 ndevos_ joined #gluster
18:45 jiffin it seems to be an old link, but I guess it is still right
18:45 kblin_ joined #gluster
18:45 zuzu 24007 seems to work everywhere
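A sketch of the firewall checks suggested above: glusterd listens on 24007/tcp for management traffic, and in the 3.7 series each brick process listens on a port from 49152 upward, so those must be reachable between nodes as well. The peer IP is an assumption taken from later in this session:

    # either stop firewalld entirely (as the oVirt guide does) ...
    systemctl stop firewalld
    # ... or flush the iptables rules
    iptables -F

    # verify the management port is reachable from each peer
    telnet 192.168.109.163 24007
    # or, without telnet installed:
    nc -zv 192.168.109.163 24007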
18:46 unforgiven512_ joined #gluster
18:47 unforgiven512_ joined #gluster
18:48 zuzu ok managed to detach everything on centos7-1
18:48 zuzu will probe on IP now
18:48 cogsu joined #gluster
18:48 Ulrar_ joined #gluster
18:48 arif-ali_ joined #gluster
18:48 mlhess- joined #gluster
18:48 unforgiven512_ joined #gluster
18:49 jiffin zuzu k
18:49 zuzu hmm I ran the probe on centos7-1
18:49 zuzu on the other nodes the Hostname of centos7-1 is not the IP but the name "glustermount" from /etc/hosts
18:50 zuzu the uuid is correct though
18:50 zuzu maybe this is the problem?
18:50 zuzu because glustermount is meant to be a round-robin DNS entry so will not always resolve to the same IP ...
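To illustrate the concern: each peer should be identified by a name or address that always resolves to the same node, while a round-robin alias is only useful as a client mount target. A hypothetical /etc/hosts layout, with the node-to-IP mapping assumed from the create command later in this session:

    # one stable name per node - safe to use for peer probes
    192.168.109.162  centos7-1
    192.168.109.163  centos7-2
    192.168.109.164  centos7-3
    # a round-robin alias such as "glustermount" that points at all three
    # nodes should only be used by clients when mounting, never as a peer name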
18:51 jiffin can u try with IP instead of hostname
18:51 jiffin ?
18:51 zuzu http://paste.fedoraproject.org/381736/62301146/
18:51 glusterbot Title: #381736 Fedora Project Pastebin (at paste.fedoraproject.org)
18:52 zuzu jiffin: that's what I did, but because of the /etc/hosts it must have changed it ...
18:52 jiffin zuzu: k
18:52 jiffin it might be a problem
18:52 zuzu unless you want me to probe now on the other nodes on IP?
18:53 jiffin in future
18:53 zuzu yeah
18:53 zuzu should edit /etc/hosts after creating the volumes right
18:53 jiffin right now peer status looks pretty good
18:53 jiffin you can do that
18:54 jiffin now try to create the volume again
18:54 zuzu ok
18:54 zuzu volume create: engine: failed: /gluster/engine/brick is already part of a volume
18:54 zuzu right
18:54 zuzu I just removed the attributes and the .glusterfs file?
18:55 zuzu something I found online by googling :)
18:55 jiffin or u can try force at the end to override that
18:55 jiffin i mean gluster v create ... force
18:55 jiffin :)
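The two workarounds being discussed, as a sketch: append force to the create command, or clear the leftover volume metadata from the brick directory on each node (the commonly cited recipe that "removing the attributes and the .glusterfs" refers to). The brick path is taken from this session; double-check the recipe against the documentation for your version:

    # option 1: override the "already part of a volume" check
    gluster volume create engine ... force    # same create arguments as before, plus force

    # option 2: strip the old volume markers from the brick, on every node
    setfattr -x trusted.glusterfs.volume-id /gluster/engine/brick
    setfattr -x trusted.gfid /gluster/engine/brick
    rm -rf /gluster/engine/brick/.glusterfs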
18:55 Vaizki joined #gluster
18:55 zuzu aha
18:56 zuzu thx
18:56 Guest89761 joined #gluster
18:56 zuzu nope :( : volume create: engine: failed: Commit failed on localhost. Please check the log file for more details.
18:57 lord4163 joined #gluster
18:57 zuzu brick2.mount_dir not present
18:57 jiffin ok check the logs
18:57 jiffin the file will be /var/log/glusterfs/etc-...log
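On a stock CentOS 7 install started through the usual systemd unit, the glusterd log jiffin is pointing at is typically named after the vol file; the exact filename below is an assumption and varies with how glusterd is launched:

    # last lines of the glusterd log on the node where the commit failed
    tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log    # assumed filename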
18:57 Vaelatern joined #gluster
18:58 zuzu yeah it says brick2.mount_dir not present
18:58 zuzu should I detach again and remove the /etc/hosts entries and retry?
18:58 jiffin no
18:58 jiffin can you show me the create command you just used
18:59 zuzu gluster volume create engine replica 3 arbiter 1 192.168.109.162:/gluster/engine/brick 192.168.109.163:/gluster/engine/brick 192.168.109.164:/gluster/engine/brick
19:00 jiffin can u paste last few lines from the log?
19:00 zuzu http://paste.fedoraproject.org/381747/66362847/
19:00 glusterbot Title: #381747 Fedora Project Pastebin (at paste.fedoraproject.org)
19:04 jiffin zuzu: all machines have the same gluster version installed, right?
19:07 jiffin gluster --version
19:10 zuzu yes
19:10 zuzu 3.7.11
19:15 zuzu removed the /etc/hosts entries
19:15 zuzu restarted dnsmasq
19:15 zuzu and it is working now
19:15 zuzu volume create: engine: success: please start the volume to access data
19:15 zuzu thx jiffin for your help
19:19 jiffin zuzu: np
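For completeness, the follow-up steps after a successful create, as a sketch; the client mount point and the choice of server to mount from are assumptions, not from this session:

    # start the new volume and confirm its state
    gluster volume start engine
    gluster volume info engine
    gluster volume status engine

    # mount it from a client (any node, or a round-robin name, will do)
    mkdir -p /mnt/engine
    mount -t glusterfs 192.168.109.162:/engine /mnt/engine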
19:25 armyriad joined #gluster
19:35 msvbhat joined #gluster
20:43 jwd joined #gluster
21:26 PaulCuzner joined #gluster
21:44 PaulCuzner joined #gluster
22:24 harish joined #gluster
23:44 armyriad joined #gluster
