IRC log for #gluster, 2013-05-22

All times shown according to UTC.

Time Nick Message
00:17 chirino joined #gluster
00:31 glusterbot New news from newglusterbugs: [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
00:53 RicardoSSP joined #gluster
00:53 RicardoSSP joined #gluster
00:58 yinyin joined #gluster
01:00 a2_ joined #gluster
01:05 avati joined #gluster
01:11 majeff joined #gluster
01:16 vpshastry joined #gluster
01:26 a2_ joined #gluster
01:28 a2 joined #gluster
01:59 jdarcy joined #gluster
02:04 nightwalk joined #gluster
02:05 sgowda joined #gluster
02:49 rastar joined #gluster
02:55 wgao hi all, "gluster peer probe 192.168.0.2" works well and the status is connected, but on the other end (192.168.0.2) the status is disconnected. What's wrong here?
03:00 jag3773 joined #gluster
03:18 clag_ joined #gluster
03:19 lalatenduM joined #gluster
03:42 saurabh joined #gluster
03:44 bharata joined #gluster
03:56 majeff joined #gluster
04:04 vpshastry joined #gluster
04:04 shylesh joined #gluster
04:08 anands joined #gluster
04:23 yinyin joined #gluster
04:30 mohankumar joined #gluster
04:39 satheesh joined #gluster
04:56 satheesh joined #gluster
05:02 piotrektt joined #gluster
05:08 rastar joined #gluster
05:10 hagarth joined #gluster
05:12 vimal joined #gluster
05:17 sgowda joined #gluster
05:26 portante joined #gluster
05:28 bdperkin joined #gluster
05:30 aravindavk joined #gluster
05:37 majeff joined #gluster
05:39 bala joined #gluster
05:40 kshlm joined #gluster
05:48 satheesh joined #gluster
05:51 ngoswami joined #gluster
05:51 bharata joined #gluster
05:57 lalatenduM joined #gluster
06:06 satheesh joined #gluster
06:06 lalatenduM joined #gluster
06:09 rgustafs joined #gluster
06:14 theron joined #gluster
06:17 sgowda joined #gluster
06:18 ricky-ticky joined #gluster
06:19 guigui1 joined #gluster
06:23 hagarth joined #gluster
06:25 jtux joined #gluster
06:30 rastar joined #gluster
06:34 ollivera joined #gluster
06:46 vshankar joined #gluster
06:47 JoeJulian joined #gluster
06:48 vpshastry1 joined #gluster
06:51 bharata joined #gluster
06:52 shireesh joined #gluster
06:53 anands joined #gluster
06:56 badone vshankar: ping
06:56 rotbeard joined #gluster
07:00 JoeJulian joined #gluster
07:00 majeff joined #gluster
07:01 vshankar badone: pong
07:01 ekuric joined #gluster
07:01 badone vshankar: ever seen this before?
07:02 badone MASTER               SLAVE                                              STATUS
07:02 badone --------------------------------------------------------------------------------
07:02 ctria joined #gluster
07:02 badone icms                 ssh://hpnxxx02:/failicms                      OK
07:02 badone icms                 ssh://hpnxxx02:/failicms                      faulty
07:02 sgowda joined #gluster
07:03 vshankar two geo-replication sessions for the same master slave tuple
07:04 badone vshankar: it would appear so
07:04 badone vshankar: how would that happen?
07:04 vshankar what version of glusterfs ?
07:05 badone vshankar: glusterfs-3.3.0.7rhs-1.el6rhs.x86_64
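What follows is a hedged sketch of how one might inspect and clear a duplicate or stale session like the one pasted above, assuming the 3.3-era geo-replication CLI; the volume name (icms) and slave URL are taken from the pasted status output, everything else is illustrative:

    # List all geo-replication sessions and their state (OK, faulty, ...)
    gluster volume geo-replication status

    # Narrow it down to the master volume in question
    gluster volume geo-replication icms status

    # If one of the duplicate entries is stale, stopping the session for that
    # exact master/slave pair may clear it from the listing
    gluster volume geo-replication icms ssh://hpnxxx02:/failicms stop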
07:06 saurabh joined #gluster
07:12 icemax joined #gluster
07:14 Guest79483 joined #gluster
07:16 icemax Hi guys. I'm looking for documentation on "How to create a Gluster volume across 4 servers with existing data (50 GB)". Gluster is installed and I already created the volume. But when I mount the FUSE mount point, everything is fucked. Can you help me with that?
07:16 icemax (sry about english mistakes)
07:18 anands joined #gluster
07:19 icemax No one here?
07:19 hagarth joined #gluster
07:30 tshm Usually not so many people around until the afternoon, when the Americans wake up.
07:31 Guest79483 joined #gluster
07:31 dobber_ joined #gluster
07:32 tshm @icemax: I'm sorry I cannot help you myself, but perhaps you should be more specific about "everything is f*cked" and maybe somebody can help you.
07:33 icemax I will try
07:33 icemax So, I have 4 servers
07:33 harish joined #gluster
07:34 icemax I created a volume between those 4.
07:34 icemax 50G of data is in the volume folder on each server
07:35 icemax then I create my mount point: mount -t glusterfs XX.XX.XX.XX:/volume_folder /mountfuse_folder
07:35 sgowda joined #gluster
07:35 icemax But the inodes look broken. I cannot read those files, I cannot use those files.
07:36 icemax Can you help me with that? Is there any documentation on how to create a Gluster volume with existing data?
07:38 Elendrys hi
07:38 glusterbot Elendrys: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:38 Elendrys icemax: i'm not an expert, but how did you put the data into the volume?
07:39 icemax I did 'rsync' between servers
07:39 icemax I did that BEFORE mounting the FUSE mount point
07:40 Elendrys So, you created the volume first (on volume_folder) and then copied data into that folder?
07:40 icemax yes
07:40 Elendrys What kind of volume did you create ?
07:40 icemax replica volume
07:41 koubas joined #gluster
07:41 icemax Volume Name: mediaforum
07:41 icemax Type: Replicate
07:41 icemax Volume ID: 825fcbc6-8861-4c6f-b0f1-7aa6c4ea6ffa
07:41 icemax Status: Started
07:41 icemax Number of Bricks: 1 x 4 = 4
07:41 icemax Transport-type: tcp
07:41 icemax Bricks:
07:42 icemax Brick1: XX.XX.XX.XX:/volume_folder
07:42 icemax Brick2: XX.XX.XX.XX:/volume_folder
07:42 icemax Brick3: XX.XX.XX.XX:/volume_folder
07:42 icemax Brick4: XX.XX.XX.XX:/volume_folder
07:42 icemax like that
07:42 Elendrys I don't think that's the right way to proceed
07:43 icemax I tried to create the volume, make the mount point, and copy data onto it, but the network became really, really unstable
07:43 Elendrys You should have: 1. Create the volume. 2. Mount the volume on at least 1 server (where you had the data). 3. Copy the data into the mount point. 4. Let the replicate self-heal do the trick
07:43 icemax 4. Let the replicate self heal do the trick ? what does that mean ?
07:43 icemax can you explain it for me ?
07:43 icemax please
07:44 vpshastry1 joined #gluster
07:44 Elendrys it's a replicated volume. When you put a file into it, it gets copied onto each brick of the volume
07:45 icemax OK, I will summarize what you said, please tell me if I understood correctly?
07:45 Elendrys ok
07:46 icemax 1. I create a new volume directory on each brick. An empty one.
07:46 icemax 2. I mount the volume only on 1 server. The 3 others are NOT mounted (so the other 3 /volume_folder directories stay empty)
07:46 icemax 3. I copy my data (50G) onto the first server, into the /mountfuse_folder
07:47 icemax 4. It works ?
07:47 icemax :)
07:47 Elendrys you should look at the network, i think it's the weak point that makes the copy across 4 nodes fail
07:47 Elendrys it's supposed to be done this way
07:47 Elendrys the 4 servers are on the same site ?
07:47 icemax yes, same DC
07:47 icemax same switch ^^
07:48 Elendrys 1 Gbps ?
07:48 icemax sure
07:48 Elendrys same vlan ?
07:48 icemax in fact, I tried that solution once, but it was 700 GB of data... maybe that's why it didn't work
07:48 icemax yes yes, same subnetwork
07:49 Elendrys OK, you summarized it well; you should try it, and start with just a small amount of the data first
07:50 Elendrys and then check heal status and logs after some time
07:50 icemax you know what, I will 'mv' my /volume_folder to /volume_folder.back. Then I will 'mkdir /volume_folder' and try your solution
07:50 icemax on each server
07:51 Elendrys yes, normally you have to copy the data via the client (FUSE mount)
07:52 Elendrys if you mount on the same host, then copying your data into the mount point is the same, in terms of the data copy, as copying it into the brick folder
07:52 majeff1 joined #gluster
07:52 icemax didn't understand your last sentence
07:53 Elendrys sorry i'm french :p
07:53 icemax ah well then tell me in French please ^^
07:53 Elendrys oh sorry
07:54 majeff joined #gluster
07:54 Elendrys technically, whether you copy your data onto the FUSE mount point or into the directory that holds your brick, in terms of the data copy it will be the same
07:54 icemax it's in terms of replication that it makes a difference?
07:55 Elendrys I may be saying something silly, but when you copy through the client, the gluster daemon writes the metadata that is used for replication
07:56 icemax what do you mean by metadata?
07:56 Elendrys extended attributes, for example
07:56 icemax ok
07:56 kron4eg joined #gluster
07:56 Elendrys I have a problem related to that at the moment
07:57 icemax right, I'll mv my folder (so I don't lose my data) and create a new empty directory. I'll let you know if it works
07:57 Elendrys you'll see, for each file copied the right way into the volume
07:57 icemax yes?
07:57 Elendrys if you go into the brick directory there are extended attributes
07:57 Elendrys (volume_folder)
07:58 Elendrys and a .glusterfs folder that contains hard links to the files
07:58 icemax yeah
07:59 Elendrys you can check by copying a file directly into that directory (i.e. without going through the mount point)
07:59 icemax ok
07:59 Elendrys and see whether you get an error in the logs, and whether the attributes are generated (but I think not)
07:59 icemax got it
07:59 Elendrys that said, I'm not an advanced user, I can't guarantee that 100%
08:00 icemax listen, I'm going to go with the solution of going through the FUSE mount. I'll watch the logs as it progresses.
08:00 icemax I hope it will work
08:00 icemax after all, 50G isn't much
08:00 Elendrys no
08:00 jtux joined #gluster
08:00 icemax but the very first time I tried it was 600G, so maybe the servers didn't like that
08:00 rastar joined #gluster
08:01 Elendrys I don't know, I ran my transfers at night
08:02 icemax ahah, ok, except I have production running behind this ^^
08:02 icemax so I avoid sleeping during the operations :)
08:03 Elendrys me too
08:03 Elendrys well, the gluster servers were only put into production afterwards
08:03 icemax yeah exactly, mine are already in production, that's why
08:04 Elendrys indeed
08:08 vpshastry joined #gluster
08:09 rb2k joined #gluster
08:20 kron4eg left #gluster
08:25 majeff1 joined #gluster
08:28 spider_fingers joined #gluster
08:28 icemax Elendrys: it seems to be working. But it takes time since it replicates to 3 servers at the same time. Thanks for your help in any case
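Pulling the exchange above together, a minimal sketch of the procedure that worked, using the volume name, replica count and brick path from icemax's volume info; the server names and the /mnt/mediaforum mount point are placeholders:

    # 1. Create and start the replicated volume from empty brick directories
    gluster volume create mediaforum replica 4 \
        server1:/volume_folder server2:/volume_folder \
        server3:/volume_folder server4:/volume_folder
    gluster volume start mediaforum

    # 2. Mount the volume through the FUSE client on one server only
    mkdir -p /mnt/mediaforum
    mount -t glusterfs server1:/mediaforum /mnt/mediaforum

    # 3. Copy the existing data in through the mount point, never directly
    #    into the brick directories
    rsync -a /volume_folder.back/ /mnt/mediaforum/

    # 4. Replication to the other bricks happens as files are written;
    #    check the state with:
    gluster volume heal mediaforum info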
08:31 majeff joined #gluster
08:34 majeff1 joined #gluster
08:35 majeff joined #gluster
08:40 majeff1 joined #gluster
08:41 ujjain joined #gluster
08:49 Guest79483 joined #gluster
09:05 glusterbot New news from newglusterbugs: [Bug 965987] rm is blocked on the fuse mount while trying to remove directories and files from the FUSE and NFS mount simultaneously <http://goo.gl/LszM3>
09:08 rastar joined #gluster
09:10 harish joined #gluster
09:35 majeff joined #gluster
09:36 rastar joined #gluster
09:43 Keawman joined #gluster
09:48 duerF joined #gluster
10:05 glusterbot New news from newglusterbugs: [Bug 966018] nfs: performance of untar of llinux kernel tarball is pathetic <http://goo.gl/QJoNq>
10:07 Nagilum_ hehe
10:07 dustint joined #gluster
10:11 manik joined #gluster
10:23 rastar joined #gluster
10:33 sonne joined #gluster
10:37 edward1 joined #gluster
10:47 hagarth joined #gluster
10:54 Elendrys @icemax: you're welcome
10:55 pkoro joined #gluster
10:56 Guest79483 left #gluster
10:59 isomorphic joined #gluster
11:01 duerF joined #gluster
11:05 majeff1 joined #gluster
11:06 Airbear_ joined #gluster
11:14 rastar joined #gluster
11:14 Airbear_ Hi, how should I deal with the following error when issuing gluster volume heal volname : "Self-heal daemon is not running. Check self-heal daemon log file."
11:14 chirino joined #gluster
11:14 Airbear_ glustershd.log shows no new information.
11:15 Airbear_ gluster volume status shows that Self-heal daemon is not running on a number of the gluster servers.
11:16 lpabon joined #gluster
11:16 Airbear_ I am guessing I want to restart glusterd on those servers where the self-heal daemon is stopped and then try again?  I'm just a bit scared to mess around without purpose!
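For reference, a short sketch of the checks being described here, for 3.3.1: restarting glusterd is the usual way to get a stopped self-heal daemon respawned, but on production servers it is worth reading the logs first (the volume name and paths below are defaults/placeholders):

    # Which servers show the Self-heal Daemon as not running?
    gluster volume status volname

    # On an affected server, look for the reason in the daemon's log
    tail -n 100 /var/log/glusterfs/glustershd.log

    # Restarting glusterd on that server should respawn the self-heal daemon
    service glusterd restart

    # Then retry the heal
    gluster volume heal volname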
11:21 anands joined #gluster
11:35 psharma joined #gluster
11:35 anands joined #gluster
11:37 psharma joined #gluster
11:38 piotrektt joined #gluster
11:38 majeff joined #gluster
11:43 majeff joined #gluster
11:45 Airbear_ .. This is gluster 3.3.1
11:47 majeff1 joined #gluster
11:49 bfoster joined #gluster
11:54 spider_fingers left #gluster
12:00 duerF joined #gluster
12:01 balunasj joined #gluster
12:01 majeff joined #gluster
12:07 guigui1 joined #gluster
12:15 manik joined #gluster
12:18 vpshastry joined #gluster
12:20 majeff1 joined #gluster
12:23 bennyturns joined #gluster
12:25 aliguori joined #gluster
12:31 hchiramm__ joined #gluster
12:46 vpshastry joined #gluster
12:50 Chocobo semiosis: Same problem even when using http://pastie.org/pastes/7940902/text
12:50 glusterbot Title: #7940902 - Pastie (at pastie.org)
12:50 Chocobo Thanks JoeJulian, I will have to check out dpaste
12:51 lalatenduM joined #gluster
13:03 manik joined #gluster
13:12 gicho joined #gluster
13:14 rwheeler joined #gluster
13:17 Chocobo JoeJulian: pastebinit looks pretty good too.  It has support for a lot of different paste sites.
13:17 hchiramm__ joined #gluster
13:18 majeff joined #gluster
13:19 gicho Hi, I need to move an existing 6-node glusterfs setup to a new datacenter with a new IP range and DNS. Any docs or pointers on how to do the reconfiguration?
13:19 hagarth joined #gluster
13:23 majeff1 joined #gluster
13:27 majeff joined #gluster
13:28 anands joined #gluster
13:34 hchiramm__ joined #gluster
13:43 rwheeler_ joined #gluster
13:48 vpshastry joined #gluster
13:50 morse joined #gluster
14:07 guigui5 joined #gluster
14:09 plarsen joined #gluster
14:12 guigui1 joined #gluster
14:31 mohankumar joined #gluster
14:31 bugs_ joined #gluster
14:33 kaptk2 joined #gluster
14:34 Elendrys @gicho : if you move the physical servers to the new infra with just a new IP set, I think this should not be an issue. Configuring the servers with the new addresses and filling in the hosts file should work. I'm not sure about that if you have to change hostnames too, because you will have to change a lot.
14:35 Elendrys @gicho : you'd better ask an expert :)
14:36 sjoeboo_ so, i've got a number of servers where it seems the brick won't start up (glusterd DOES, i can see the volume info, and peers are all connected, but the brick doesn't come up; the brick LOG doesn't even get updated)
14:36 manik left #gluster
14:37 jtux joined #gluster
14:40 sjoeboo_ if i go look in the brick file on a given host, and find the listen-port, and then look for it, nothing seems to be listening there.
14:41 shylesh joined #gluster
14:41 rastar joined #gluster
15:00 vpshastry left #gluster
15:04 guigui3 joined #gluster
15:08 lpabon joined #gluster
15:18 shireesh joined #gluster
15:24 sprachgenerator joined #gluster
15:30 jthorne joined #gluster
15:33 piotrektt_ joined #gluster
15:39 lh joined #gluster
15:39 lh joined #gluster
15:48 portante joined #gluster
15:52 majeff joined #gluster
15:53 majeff1 joined #gluster
15:53 sjoeboo_ any docs on upgrading to 3.4 beta1?
15:53 sjoeboo_ or is it just stop glusterd, upgrade, start
15:54 devoid joined #gluster
16:00 sr71 left #gluster
16:21 thomasle_ joined #gluster
16:32 kshlm joined #gluster
16:37 Mo__ joined #gluster
16:38 hagarth joined #gluster
16:40 hagarth joined #gluster
16:41 glusterbot New news from newglusterbugs: [Bug 965869] Redundancy Lost with replica 2 and one of the servers rebooting <http://goo.gl/rHFrW> || [Bug 953694] Requirements of Samba VFS plugin for glusterfs <http://goo.gl/v7g29>
17:02 hchiramm__ joined #gluster
17:06 sjoeboo_ so.....LONG story, but...
17:06 sjoeboo_ it seems if you take a 3.3.1 node, upgrade it to 3.4
17:06 sjoeboo_ and then remove/downgrade back to 3.3.1
17:06 sjoeboo_ ALMOST everything works
17:06 sjoeboo_ except the bricks don't start back up
17:06 sjoeboo_ because /var/lib/glusterd/glusterd.info ends up w/ a new uuid different from what's in the volume/peer list
17:07 sjoeboo_ changed that back and things went back to "okay"
17:07 sjoeboo_ for some value of okay
17:09 semiosis wow
17:09 semiosis i wonder at what point the uuid changed
17:11 sjoeboo_ yeah, not clear if it was the upgrade (i HOPE not), or the downgrade
17:11 sjoeboo_ all the volfile/brick/peer info was preserved
17:11 sjoeboo_ THAT was a fun late night/all morning exercise
17:11 JoeJulian rpms?
17:11 sjoeboo_ yep
17:11 JoeJulian using "yum downgrade"?
17:12 sjoeboo_ yum downgrade wouldn't do it due to some dep cycles. yum remove glusterfs-*; yum install glusterfs-3.3.1.xxxxxx
17:12 sjoeboo_ i assume that install generated a new uuid.
17:13 sjoeboo_ the interesting thing was that starting glusterd up seemed okay, and the node would join the cluster, but the brick wouldn't ever start up.
17:16 JoeJulian Well you should definitely file a bug report. I'm digging through the rpms trying to see what would touch that file. I'm not finding anything yet.
17:16 glusterbot http://goo.gl/UUuCq
17:18 JoeJulian Well, there's nothing obvious.
17:18 andrewjsledge joined #gluster
17:21 kkeithley Let me know what you find. In theory the rpms don't touch existing files. They should install a -gluster version where there's an existing config file.
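A hedged way to check for the mismatch sjoeboo_ is describing, assuming the standard /var/lib/glusterd layout: each node's own identity sits in glusterd.info, while the other peers record the UUID they expect for it under /var/lib/glusterd/peers/:

    # This node's own UUID
    cat /var/lib/glusterd/glusterd.info

    # On any other peer: the UUIDs it knows about (file names and uuid= lines)
    ls /var/lib/glusterd/peers/
    grep -H uuid= /var/lib/glusterd/peers/*

    # If glusterd.info no longer matches what the peers and volumes expect,
    # the fix sjoeboo_ applied was to stop glusterd, put the old UUID back
    # into glusterd.info, and start glusterd again.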
17:30 Airbear joined #gluster
17:32 * JoeJulian grumbles about mailing-list support...
17:33 majeff1 left #gluster
18:10 brian__ joined #gluster
18:13 daMaestro joined #gluster
18:15 brian__ Hi all, I'm having some issues with creating peers. I had previously created a volume with 3 peers (node02, node03, node04).. I have since detached all of those, stopped and deleted the volume (named: gv0), and have since tried to recreate the volume. The problem is that now, even though the command "gluster peer status" shows "No peers present", when I try to add some peers with "gluster peer probe node03" for example, i get a message
18:15 brian__ back that says "Probe on host node03 already in peer list"… My question is, how can I get those out so I can re-add them? Since it shows "No peers present", I'm not sure what I can do to fix this… Can you help please?
18:22 JoeJulian brian__: interesting. Have you tried stopping all glusterd and starting them again?
18:22 brian__ yep. but I'll try that again
18:24 brian__ one difference to note if it makes a difference. I didn't like the performance I got mounting over an ext4 filesystem, so what I did was repartition my brick nodes to have a dedicated xfs partition. Now I'm in the process of setting my volume back up as a distributed volume running on a dedicated XFS partition..
18:25 brian__ is there othe was to show the peer list besides using "gluster peer status" ?
18:25 brian__ other ways*
18:26 JoeJulian no
18:27 JoeJulian If that doesn't work, stop glusterd on all your servers, then delete /var/lib/glusterd/peers from them. You can then start glusterd again. If it still thinks there are peers after that, then I'd start looking at your quantum processor because you're pulling data out of thin air. :D
18:27 brian__ lol
18:28 fps left #gluster
18:28 brian__ ok.. when I delete that file, do I need to recreate or "touch" it so it exists, or will it recreate itself?
18:29 y4m4 joined #gluster
18:31 purpleidea joined #gluster
18:31 purpleidea joined #gluster
18:34 JoeJulian It will recreate itself.
18:34 brian__ k
18:34 brian__ thx
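A sketch of the reset JoeJulian describes, to be run on every server (assuming glusterd is managed as an init service on these EL6 boxes); the peers directory is recreated automatically, and the peers are then re-probed from one node:

    # On every server:
    service glusterd stop
    rm -rf /var/lib/glusterd/peers
    service glusterd start

    # Then, from one node, re-probe the others and check the result
    gluster peer probe node03
    gluster peer probe node04
    gluster peer status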
18:38 kshlm joined #gluster
18:40 brian__ hmm still getting it after deleting the files that were in /var/lib/glusterd/peers/
18:41 brian__ any other ideas Joe?
18:42 brian__ want me to paste into fpaste.org my commands and output?
18:44 kaushal_ joined #gluster
18:55 brian__ JoeJulian: The weird part is, I can detach and add node02 all I want.. It behaves just fine.. but when I try to do the same with node03 and node04… I get the message "Probe on host node03 (or node04 if I try that one) already in peer list"… Here is a paste of the commands I ran: http://fpaste.org/13792/36924890/
18:55 glusterbot Title: #13792 Fedora Project Pastebin (at fpaste.org)
18:58 ofu does gluster only work on linux? Has anybody tried Solaris?
19:03 devoid joined #gluster
19:07 failshell joined #gluster
19:08 failshell hello. do you guys know of a simple web file manager i could use to front a gluster volume?
19:09 brian__ JoeJulian: ok I've noticed something different between node02 and the other two that are having trouble adding peers… node02 (after I attach with peer probe) shows a file in the /var/lib/glusterd/peers directory whose file name is a UUID… when I detach node02, that file goes away (as it should).. but when I try to add node03 with peer probe, it drops a file in the /var/lib/glusterd/peers directory with the name 10.1.1.254 (instead of a
19:09 brian__ UUID), which is the IP address of the head node….
19:09 JoeJulian ~hostnames | brian__
19:10 glusterbot brian__: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
19:10 JoeJulian brian__: probably that last bit...
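The factoid above, spelled out as commands with hypothetical hostnames node01..node04: probe every other server by name from the first, then probe the first by name from one of the others so that it, too, is recorded by hostname rather than by IP:

    # From node01:
    gluster peer probe node02
    gluster peer probe node03
    gluster peer probe node04

    # From node02 (or any other peer), so node01 is known by name as well:
    gluster peer probe node01

    # Verify on any node:
    gluster peer status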
19:10 JoeJulian ofu: I think I remember someone working on solaris, but I have no idea what the status is on that.
19:11 JoeJulian failshell: Any should work.
19:11 failshell JoeJulian: all the ones i look at kinda suck
19:12 failshell at different levels for each
19:12 rotbeard joined #gluster
19:12 failshell owncloud.org looked awesome until i saw the bugs opened in github hehe
19:12 JoeJulian Well, if you find one you like, blog about it. :D
19:13 JoeJulian For that matter, blog about the ones you don't like too.
19:16 JonnyNomad I've had good luck with owncloud lately. Just sayin'.
19:17 failshell JonnyNomad: which version? i read people lose files with it and they often end up with thousands of conflict files
19:17 failshell and there's currently almost 700 bugs opened
19:17 failshell its scary
19:17 JonnyNomad failshell: we're on 5.0.5 here. I have about 30 people using it without issue.
19:18 failshell guess ill give it a try
19:18 failshell JonnyNomad: do you use gluster with it?
19:18 JonnyNomad failshell: not yet
19:18 failshell by the way, anyone serving a git repo with gluster?
19:19 failshell going to attempt that with Gitlab soonish
19:19 failshell hopefully this week
19:19 hagarth joined #gluster
19:22 brian__ JoeJulian: ok I just noticed this… for some reason it will only let me add one peer. I just tried the other two nodes I was having problems adding to the peer list (node03 and node04), and so long as I have no other peers attached it will let me add those individually, so long as they are the only peer in the list. As soon as I try to add a second peer, I get the message: Probe on host node0# port 0 is already on the peer list… I can't add
19:22 brian__ more than one peer… O_o
19:23 JoeJulian brian__: This is 3.3.1?
19:23 Keawman I second owncloud it has worked great for my needs
19:23 Keawman i actually have it running in kvm against gluster backend
19:24 semiosis <3 gitlab
19:24 brian__ JoeJulian: yes…  glusterfs-3.3.1-1.el6.x86_64
19:25 semiosis i'm not running gitlab over glusterfs, but it should work fine afaict
19:25 Airbear joined #gluster
19:27 devoid joined #gluster
19:27 JoeJulian brian__: stop glusterds. Truncate the glusterd.vol.logs. remove the peers directories. start glusterd and probe from one server to two others. fpaste the glusterd.vol.log from that one server.
19:27 brian__ JoeJulian: I was able to add multiple peers before I deleted the volume and re-created everything
19:28 brian__ ok
19:32 brian__ JoeJulian: by glusterd.vol.logs, you're referring to the files cli.log and etc-glusterfs-glusterd.vol.log located in /var/log/glusterfs right?
19:32 Keawman has anyone here been testing 3.4beta1?
19:33 JoeJulian brian__: Just etc-glusterfs-glusterd.vol.log but on all the servers. I probably won't care what's on the other servers, but just in case, let's start them all fresh.
19:33 brian__ k… not on the head though, right?
19:33 semiosis there is no head
19:33 JoeJulian What head?
19:33 sjoeboo joined #gluster
19:33 JoeJulian If it's on your head, that'll make it really hard to type.
19:34 brian__ hahaha .. just where I'm running commands from.
19:34 tshm_ joined #gluster
19:35 JoeJulian All. The only thing that I want done on only one server is the probe (for now).
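A sketch of that diagnostic run, assuming the default log file name confirmed just above (/var/log/glusterfs/etc-glusterfs-glusterd.vol.log) and hypothetical node names:

    # On every server:
    service glusterd stop
    > /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # truncate the log
    rm -rf /var/lib/glusterd/peers
    service glusterd start

    # On one server only:
    gluster peer probe node03
    gluster peer probe node04

    # Then paste that one server's etc-glusterfs-glusterd.vol.log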
19:37 haakon__ joined #gluster
19:37 waldner_ joined #gluster
19:37 waldner_ joined #gluster
19:38 MinhP joined #gluster
19:38 Nuxr0 joined #gluster
19:38 SteveCoo1ing joined #gluster
19:39 NeatBasis_ joined #gluster
19:41 ehg_ joined #gluster
19:42 glusterbot New news from newglusterbugs: [Bug 966207] replace-brick commit force refuses to work when it cannot resolve source brick <http://goo.gl/qfGQP>
19:46 atoponce joined #gluster
19:49 tjikkun joined #gluster
19:49 tjikkun joined #gluster
19:49 brian__ JoeJulian: Here it is… I added notes into the paste for clarity.. http://fpaste.org/13809/13692521/
19:49 glusterbot Title: #13809 Fedora Project Pastebin (at fpaste.org)
19:50 wN joined #gluster
19:51 portante joined #gluster
19:53 JoeJulian brian__: You restored a cloned image?
19:53 brian__ ACK…. yes… UUID problems??
19:53 JoeJulian yep
19:53 brian__ doh!
19:53 hjmangalam1 joined #gluster
19:54 JoeJulian Just stop glusterd on all, rm -rf /usr/lib/glusterd/*, start glusterd
19:55 brian__ ok.. so I understand… is /usr/lib/glusterd/* where the UUID's are?
19:57 JoeJulian dammit, /var not /usr
19:57 brian__ hahah whew.. i was wondering about that… had not run the command yet.. but was getting ready too
19:57 brian__ lol
19:57 JoeJulian I'm too multitasked....
19:58 brian__ definitely… but you're still the wizard of gluster
19:58 brian__ :)
19:58 JoeJulian The uuid is specifically in /var/lib/glusterd/glusterd.info, but other files are in screwed up states so it'd be best to just wipe it.
19:59 brian__ so if I just blast that whole directory at /var/lib/glusterd/*… it will be recreated again when glusterd is started?
19:59 JoeJulian yes.
20:00 brian__ k cool… here goes
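A sketch of the cloned-image fix being applied here; note JoeJulian's correction that the path is /var/lib/glusterd, not /usr/lib/glusterd, and that wiping it discards all peer and volume state on that node, so it only suits a node being rebuilt (the peer name below is a placeholder):

    # On each server that was restored from the same cloned image:
    service glusterd stop

    # The duplicated identity lives here
    cat /var/lib/glusterd/glusterd.info

    # Wipe the state; glusterd regenerates it (with a fresh UUID) on start
    rm -rf /var/lib/glusterd/*
    service glusterd start

    # Re-probe the peers afterwards
    gluster peer probe node02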
20:01 JoeJulian I need one of these to plug in to my brain: http://goo.gl/6llLN
20:01 glusterbot Title: Quantum Or Not, New Supercomputer Is Certainly Something Else : NPR (at goo.gl)
20:02 brian__ JoeJulian: SUCCESS!! You'da man!  lol
20:02 JoeJulian Glad I could help.
20:15 duerF joined #gluster
20:16 brian__ JoeJulian: Me too! thanks much
20:19 rwheeler joined #gluster
20:24 brian__ left #gluster
20:25 chirino joined #gluster
20:34 jbrooks joined #gluster
20:46 ricky-ticky joined #gluster
21:17 chirino joined #gluster
21:17 devoid joined #gluster
21:38 rb2k joined #gluster
21:46 brian__ joined #gluster
21:50 brian__ JoeJulian: Having another issue… and ironically I found a solution from you here -> http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/  My question is about the trusted.glusterfs.volume-id and trusted.gfid files. Where are those files located? In the brick directories?
21:50 glusterbot <http://goo.gl/YUzrh> (at joejulian.name)
21:54 semiosis brian__: they are ,,(extended attributes)
21:54 glusterbot brian__: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://goo.gl/Bf9Er
21:56 brian__ thanks semiosis
21:56 semiosis yw
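For context, a hedged example of working with those extended attributes on a brick root: the getfattr line is the one glusterbot gives above, and the setfattr/rm lines show the cleanup commonly used for the "path or a prefix of it is already part of a volume" error discussed in the linked post, to be run only on a brick directory that is deliberately being recycled (the path is a placeholder):

    BRICK=/volume_folder

    # Read the extended attributes GlusterFS stores on the brick root
    getfattr -m . -d -e hex $BRICK

    # To reuse the directory for a new volume, remove the old volume markers
    setfattr -x trusted.glusterfs.volume-id $BRICK
    setfattr -x trusted.gfid $BRICK
    rm -rf $BRICK/.glusterfs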
21:57 brian__ left #gluster
22:06 aliguori joined #gluster
22:13 nightwalk joined #gluster
22:13 isomorphic joined #gluster
22:53 lh joined #gluster
22:53 lh joined #gluster
23:33 lnxsix joined #gluster
23:46 jclift joined #gluster
23:52 yinyin joined #gluster
