
IRC log for #gluster, 2015-04-16


All times shown according to UTC.

Time Nick Message
00:21 glusterbot News from newglusterbugs: [Bug 1177167] ctdb's ping_pong lock tester fails with input/output error on disperse volume mounted with glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1177167>
00:24 T3 joined #gluster
00:33 cornusammonis joined #gluster
00:33 ccha joined #gluster
00:35 lexi2 joined #gluster
00:37 Lee- joined #gluster
00:37 mattmcc joined #gluster
01:02 itisravi joined #gluster
01:15 plarsen joined #gluster
01:18 julim joined #gluster
01:31 Gill joined #gluster
01:36 gem joined #gluster
01:45 MugginsM joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 nangthang joined #gluster
01:48 nangthang joined #gluster
01:52 Lee- joined #gluster
02:15 ZakWolfinger joined #gluster
02:18 harish joined #gluster
02:21 T3 joined #gluster
02:45 Gill joined #gluster
02:51 ZakWolfinger joined #gluster
02:52 churnd joined #gluster
02:52 bharata-rao joined #gluster
02:56 _pol joined #gluster
02:57 _pol I have a server that is showing up in the peer list twice with two different uids.  What would cause that?
02:57 _pol This is after I did a crashed server replacement and re-used the uid of the crashed server.  It looks fine (all connected), but there's another uid associated with the same server that won't go away.
03:00 _pol also, that server lists that it is peered to itself...
03:00 _pol (gluster v3.6.2, centos6.6)
03:08 aravindavk joined #gluster
03:17 _pol Ok, so i deleted the /var/lib/glusterfs/peers/<uid> files for the errant uid.  It seems to be gone for now, but I wish I knew where it came from.
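A minimal sketch of how such a stale peer entry can be inspected and removed, assuming the stock state directory /var/lib/glusterd (the path above looks like a slip for it) and a CentOS 6 style init script; the UUID is a placeholder:

    gluster peer status                        # list peers and the UUIDs glusterd has recorded
    ls /var/lib/glusterd/peers/                # one state file per peer, named by its UUID
    service glusterd stop                      # stop the daemon before touching its state
    rm /var/lib/glusterd/peers/<errant-uuid>   # <errant-uuid> stands for the duplicate entry
    service glusterd start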
03:18 sripathi joined #gluster
03:28 T3 joined #gluster
03:31 churnd joined #gluster
03:45 nbalacha joined #gluster
03:52 kdhananjay joined #gluster
03:53 itisravi joined #gluster
03:54 kanagaraj joined #gluster
04:00 RameshN joined #gluster
04:07 atinmu joined #gluster
04:10 owlbot joined #gluster
04:15 spandit joined #gluster
04:16 hagarth joined #gluster
04:17 vimal joined #gluster
04:22 anoopcs joined #gluster
04:22 churnd joined #gluster
04:29 T3 joined #gluster
04:34 jiku joined #gluster
04:37 T3 joined #gluster
04:39 shubhendu joined #gluster
04:41 ppai joined #gluster
04:42 schandra joined #gluster
04:44 Manikandan joined #gluster
04:44 gem_ joined #gluster
04:49 ndarshan joined #gluster
04:50 rafi joined #gluster
04:50 cornusammonis joined #gluster
04:52 Bhaskarakiran joined #gluster
04:54 glusterbot News from resolvedglusterbugs: [Bug 1138897] NetBSD port <https://bugzilla.redhat.com/show_bug.cgi?id=1138897>
04:56 meghanam joined #gluster
04:59 jiffin joined #gluster
05:03 plarsen joined #gluster
05:05 kdhananjay joined #gluster
05:08 lalatenduM joined #gluster
05:12 aravindavk joined #gluster
05:14 Bhaskarakiran joined #gluster
05:14 soumya joined #gluster
05:15 pppp joined #gluster
05:20 hagarth joined #gluster
05:21 hgowtham joined #gluster
05:32 nishanth joined #gluster
05:35 SOLDIERz joined #gluster
05:36 bharata-rao joined #gluster
05:37 raghu joined #gluster
05:39 karnan joined #gluster
05:40 ashiq joined #gluster
05:46 Manikandan joined #gluster
05:47 ashiq- joined #gluster
05:50 nishanth joined #gluster
05:52 gem joined #gluster
06:02 kshlm joined #gluster
06:05 soumya joined #gluster
06:10 wtrac joined #gluster
06:11 wtrac Anyone got any suggestions on what to do if a rebalance gets "stuck" ?
06:12 kshlm joined #gluster
06:13 saurabh joined #gluster
06:15 T3 joined #gluster
06:16 ashiq joined #gluster
06:17 anil joined #gluster
06:18 wtrac volume rebalance data stop
06:18 wtrac Node Rebalanced-files          size       scanned      failures       skipped               status   run time in secs
06:18 wtrac ---------      -----------   -----------   -----------   -----------   -----------         ------------     --------------
06:18 glusterbot wtrac: -------'s karma is now -4
06:18 glusterbot wtrac: ---------'s karma is now -1
06:18 wtrac localhost                0        0Bytes             0             0             0               failed               0.00
06:18 glusterbot wtrac: ---------'s karma is now -2
06:18 wtrac gluster4.localdomain                0        0Bytes             0             0             0               failed               0.00
06:18 glusterbot wtrac: ---------'s karma is now -3
06:18 wtrac gluster2.localdomain                0        0Bytes             0             0             0               failed               0.00
06:18 glusterbot wtrac: ---------'s karma is now -4
06:18 glusterbot wtrac: ---------'s karma is now -5
06:18 wtrac gluster3.localdomain                0        0Bytes             0             0             0               failed               0.00
06:18 glusterbot wtrac: ----------'s karma is now -1
06:18 wtrac volume rebalance: data: success: rebalance process may be in the middle of a file migration.
06:18 glusterbot wtrac: ------------'s karma is now -1
06:18 wtrac The process will be fully stopped once the migration of the file is complete.
06:18 wtrac Please check rebalance process for completion before doing any further brick related tasks on the volume.
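For reference, the commands behind that paste look roughly like the sketch below, assuming the volume really is named data; the per-node rebalance log (usually under /var/log/glusterfs/ on each server) is where to look for why every node reports failed:

    gluster volume rebalance data status   # per-node progress, failures and run time
    gluster volume rebalance data stop     # asks the migrator to stop after the current file
    gluster volume rebalance data start    # re-run once the cause of the failures is fixed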
06:21 nishanth joined #gluster
06:26 nshaikh joined #gluster
06:30 pdrakeweb joined #gluster
06:30 anoopcs joined #gluster
06:31 oxae joined #gluster
06:40 rjoseph joined #gluster
06:40 kotreshhr joined #gluster
06:47 jtux joined #gluster
06:51 SOLDIERz joined #gluster
06:52 itpings hi gusys
06:52 itpings i mean guys
06:52 gem_ joined #gluster
06:53 itpings yesterday i asked one question " /mnt/gluster: Transport endpoint is not connected" error
06:53 itpings could some one explain me
07:00 ghenry joined #gluster
07:00 atinmu joined #gluster
07:05 wica itpings: That means that the mount point can not connect to the selected glusterfs server.
07:05 XpineX joined #gluster
07:06 wica itpings: Do you use something like RoundRobin to connect to the glusterfs servers?
07:16 T3 joined #gluster
07:21 Slashman joined #gluster
07:26 stickyboy joined #gluster
07:28 rjoseph joined #gluster
07:28 nbalacha joined #gluster
07:28 Bhaskarakiran joined #gluster
07:28 atinmu joined #gluster
07:28 ndarshan joined #gluster
07:28 schandra joined #gluster
07:28 hchiramm joined #gluster
07:28 karnan joined #gluster
07:28 meghanam joined #gluster
07:28 kshlm joined #gluster
07:28 spandit joined #gluster
07:28 lalatenduM joined #gluster
07:28 saurabh joined #gluster
07:28 nshaikh joined #gluster
07:28 rp_ joined #gluster
07:28 Manikandan joined #gluster
07:28 ashiq joined #gluster
07:28 soumya joined #gluster
07:28 itisravi joined #gluster
07:28 anil joined #gluster
07:28 RaSTar joined #gluster
07:28 RameshN_ joined #gluster
07:29 kanagaraj_ joined #gluster
07:29 ppai joined #gluster
07:29 hgowtham joined #gluster
07:29 shubhendu joined #gluster
07:30 deniszh joined #gluster
07:31 vikumar joined #gluster
07:31 sac joined #gluster
07:31 hagarth joined #gluster
07:31 kdhananjay joined #gluster
07:32 rafi joined #gluster
07:32 pppp joined #gluster
07:33 nishanth joined #gluster
07:36 kotreshhr joined #gluster
07:38 jiffin joined #gluster
07:40 fsimonce joined #gluster
07:46 [Enrico] joined #gluster
07:47 hagarth joined #gluster
07:48 SOLDIERz joined #gluster
07:49 Slashman joined #gluster
07:50 ctria joined #gluster
07:54 maveric_amitc_ joined #gluster
07:56 DV joined #gluster
08:01 ktosiek joined #gluster
08:08 shubhendu joined #gluster
08:12 hagarth joined #gluster
08:13 anoopcs joined #gluster
08:15 SOLDIERz joined #gluster
08:16 T3 joined #gluster
08:17 itisravi_ joined #gluster
08:18 hagarth joined #gluster
08:22 atalur joined #gluster
08:22 T0aD joined #gluster
08:27 overclk joined #gluster
08:33 itpings hi wica
08:34 itpings i have created a simple replication
08:34 wica hi itpings
08:34 itpings of two nodes
08:34 itpings i have now removed conflicting ram
08:35 itpings now checking
08:38 wica "conflicting ram" ?
08:38 itpings i thought its ram
08:38 itpings but no
08:38 itpings i never had this issue before
08:39 wica But how do you connect to our glusterfs volume?
08:39 wica your
08:39 ppai joined #gluster
08:40 wtrac Any idea how to solve a failed rebalance :( ? NFS keeps going down after too
08:40 itpings with mount -t glusterfs node1:/vol /mnt/gluster
08:40 wica itpings: So node1 is the issue
08:41 itpings no both
08:41 wica ok
08:41 wica did you do any updates?
08:44 itpings strange
08:44 itpings i think something wrong with fstab
08:44 wica on the client?
08:46 itpings why would it ask for mount point in fstab ?
08:46 itpings look at this strange error
08:46 wica can you c&p the line in fstab here?
08:46 itpings mount: can't find /mnt/gluster1/ in /etc/fstab
08:47 itpings the operation went smoothly on node2
08:47 itpings yeah sure
08:48 itpings /dev/mapper/centos-root /                       xfs     defaults        1 1
08:48 itpings UUID=e962c230-168b-4402-8efd-79eff38154a8 /boot xfs     defaults        1 2
08:48 itpings /dev/mapper/centos-home /home                   xfs     defaults        1 2
08:48 itpings /dev/mapper/centos-swap swap                    swap    defaults        0 0
08:48 itpings /dev/sdb1 /opt/datavol1/datavol1        xfs     defaults        0 0
08:49 wica Nop, no /mnt/gluster1/
08:50 itpings no i am manually adding it
08:50 itpings but if i add i add like this
08:50 itpings backup1:/datavol1       /mnt/gluster1   glusterfs       defaults 0 0
08:51 itpings and then if i try to remount manually it gives error
08:51 itpings ----------------------
08:51 itpings mount -t backup1:/datavol1 /mnt/gluster1/
08:51 glusterbot itpings: --------------------'s karma is now -1
08:51 itpings mount: unknown filesystem type 'backup1:/datavol1
08:51 itpings ty glusterbot
08:51 wica hehe,
08:51 itpings lol
08:52 wica 1 sec
08:52 itpings i am already in trouble and you just made my day gb
08:52 itpings :p
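The "unknown filesystem type" message above is just mount parsing backup1:/datavol1 as the -t argument because the type was left out; a sketch of the intended forms, assuming the volume is named datavol1 and backup1 resolves from this machine:

    # one-off mount: -t takes the type, then the volume, then the mount point
    mount -t glusterfs backup1:/datavol1 /mnt/gluster1

    # matching fstab line, after which a plain "mount /mnt/gluster1" works
    backup1:/datavol1  /mnt/gluster1  glusterfs  defaults,_netdev  0 0

_netdev only delays the mount until the network is up at boot; it is optional.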
08:53 wica Sorry for asking this, but you have installed the glusterfs client on that machine?
08:53 wica and it has the same version as the glusterfs server?
08:56 itpings no client
08:56 itpings both nodes are same
08:57 itpings with replication
08:57 itpings one is backup1
08:57 itpings and other is backup2
08:57 itpings i have created guide and video on glusterfs
08:57 itpings all worked fine earlier
08:57 itpings i just ran into this situation first time
08:58 itpings i think something to do with OS
08:59 liquidat joined #gluster
09:01 itpings working
09:02 itpings again issue
09:02 itpings i think vol is corrupted
09:02 itpings need reinstall
09:03 wica :/ sorry to hear
09:03 itpings but good i saved data to other drive
09:03 itpings 3 TB of corporate data
09:04 itpings anyway
09:04 SOLDIERz joined #gluster
09:04 itpings thanks wica
09:04 anil left #gluster
09:06 sage_ joined #gluster
09:08 wica itnp
09:09 RayTrace_ joined #gluster
09:10 anrao joined #gluster
09:16 Pupeno joined #gluster
09:17 T3 joined #gluster
09:20 wtrac left #gluster
09:21 liquidat joined #gluster
09:22 ashiq joined #gluster
09:22 Jeeves_ joined #gluster
09:22 Jeeves_ Hi
09:22 glusterbot Jeeves_: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:24 Jeeves_ I have a testsetup here with three VM's with two bricks each, and I'm trying to run owncloud from it. It's extremely slow. Is this what I should expect, or am I doing something wrong?
09:24 atalur joined #gluster
09:24 Jeeves_ Obviously, the setup is suboptimal, so I don't expect blazing performance. But writing to owncloud with 15KB/sec is not what I expect. Opening a file is taking about 8 seconds.
09:24 stickyboy joined #gluster
09:25 deepakcs joined #gluster
09:26 Norky joined #gluster
09:26 wica Hi Jeeves_
09:30 Jeeves_ wica: Hi, the bot doesn't allow us to say hi ;)
09:30 wica No, and also not
09:30 wica ----------
09:30 glusterbot wica: --------'s karma is now -3
09:30 wica :)
09:30 wica Jeeves_: Can you tell a little more more about the setup?
09:31 wica and writing a file with something like scp is fast, I guess ?
09:35 ashiq joined #gluster
09:35 Jeeves_ I just switched to using an nfs mount, that seems to have helped (a lot)
09:35 jpds_ Jeeves_: o/
09:36 nshaikh joined #gluster
09:36 wica Jeeves_: That is because, with nfs the glusterfs server will replicate the data to the other node.
09:37 wica Jeeves_: When using gluster, the client will send the data to both bricks.
09:38 Jeeves_ jpds_: ?!
09:38 Jeeves_ wica: So basically, the client sucks?
09:39 wica I can not say that in this channel
09:39 Jeeves_ You can msg me
09:39 Norky that's a deliberate design decision, for resilience
09:39 Jeeves_ Norky: By make it barely usable?
09:40 Norky the only way the client can be sure it's written to all replica servers is by sending it directly to all replica servers
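A sketch of the two access paths being compared, with webvol and server1 as placeholder names; Gluster's built-in NFS server speaks NFSv3, so the version has to be pinned on the client:

    # native (FUSE) client: the client itself writes to every replica brick
    mount -t glusterfs server1:/webvol /mnt/webvol

    # built-in NFS: the client writes once and the server fans out to the replicas
    mount -t nfs -o vers=3,tcp server1:/webvol /mnt/webvol-nfs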
09:40 wica Jeeves_: It depends on the
09:40 Norky if you're seeing NFS perform better than the native protocol, it might be for another reason
09:41 overclk joined #gluster
09:41 Norky the native protocol doesn't handle many small files well
09:41 wica Norky: NFS is working great for a large number of small files.
09:41 wica ;p
09:42 Norky this is being worked on, but yes, it's a known issue
09:44 wica Norky: yes, I see it on the mailing list. and a work around is file name hashing
09:46 Norky the hashing algo isn't a work-around to anything
09:48 Norky are you seeing the poor performance when readin, writing, or both?
09:49 Norky what exactly are you doing that is performing badly?
09:49 wica Norky: it is a work around for readdir
09:49 badone|brb joined #gluster
09:49 wica The devs here like to put millions of small xml files in 1 directory
09:50 wica and are
09:50 wica and are
09:50 wica surprised that an ls -l takes forever
09:52 Norky glusterfs used to use just readdir, which means 2 (or 4?) packets back and forth across the network for the metadata for EVERY SINGLE file in a directory, which compounds the network latency so that, yes, ls -l will take a very long time
09:53 glusterbot News from newglusterbugs: [Bug 1212377] "Transport Endpoint Error" seen when hot tier is unavailable <https://bugzilla.redhat.com/show_bug.cgi?id=1212377>
09:53 glusterbot News from newglusterbugs: [Bug 1212368] Data Tiering:Clear tier gfid when detach-tier takes place <https://bugzilla.redhat.com/show_bug.cgi?id=1212368>
09:54 Norky support for READDIRPLUS, which, AIUI transfers multiple file metadata in batches, was added in 2013, and improved matters a bit, but still didn't make it as fast as NFS
09:54 anrao joined #gluster
09:54 Norky assuming your servers and client have a recent kernel/FUSE, they should be able to use READDIRPLUS, but for now NFS is still faster
09:56 Norky at present, your choices are: put fewer files in more subdirectories (millions of files in one dir. is kind of a bad idea anyway, but people don't care because most file systems cope with it), or use NFS
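One quick way to see the metadata-latency effect being described, sketched with made-up mount points, is to time the same listing over the FUSE mount and over an NFS mount of the same volume:

    echo 3 > /proc/sys/vm/drop_caches                  # as root, so cached metadata doesn't skew the test
    time ls -l /mnt/gluster-fuse/bigdir > /dev/null    # native client: per-entry lookups add up
    time ls -l /mnt/gluster-nfs/bigdir  > /dev/null    # NFS mount of the same volume, for comparison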
09:56 Jeeves_ Norky: I get bad performance for both
09:56 Norky sorry, not a perfect solution
09:56 Norky btoh read and write?
09:56 Jeeves_ Yes
09:56 Jeeves_ Not if I do a dd with zeros
09:56 Jeeves_ That goes fast
09:57 Norky indeed, the I/O throughput should be comparable to NFS
09:57 Jeeves_ It's not
09:57 Norky err, I was agreeing with you
09:57 Norky and now you've just disagreed with me? :)
09:58 Jeeves_ Uh? Confused! :)
09:58 Norky dd if=/dev/zero of=/glusterfs/subdir/testfile (for example) is a test of I/O throughput
09:59 Jeeves_ Yes, I know. And that's fast.
09:59 Norky so, it's "fast" but not as fast as NFS?
09:59 overclk joined #gluster
10:00 Jeeves_ I installed owncloud on the glusterfs volume, and that's terribly slow.
10:00 Jeeves_ Lots of php files
10:00 Norky I'm saying that I expect a pure I/O test, such as dd to a single file, should be comparable to NFS
10:02 Norky an operation that involves many (small) files, on the other hand, will likely be significantly slower than NFS
10:02 Norky (depending on exactly what you're doing)
10:03 Norky @php
10:03 glusterbot Norky: (#1) php calls the stat() system call for every include. This triggers a self-heal check which makes most php software slow as they include hundreds of small files. See http://joejulian.name/blog/optimizing-web-performance-with-glusterfs/ for details., or (#2) It could also be worth mounting fuse with glusterfs --attribute-timeout=HIGH --entry-timeout=HIGH --negative-timeout=HIGH
10:03 glusterbot Norky: --fopen-keep-cache
10:03 Norky follow that link
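Those switches map onto mount options roughly as sketched below; the 600-second values stand in for "HIGH", and the volume and path names are invented, so treat it as a starting point rather than a recipe:

    mount -t glusterfs \
      -o attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache \
      node1:/webvol /var/www/shared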
10:04 Jeeves_ Hmm, thanks. I'll have a look at that
10:04 Norky in essence, php does something stupid, but you don't normally notice it with other FSes
10:06 Jeeves_ Yeah, let's agree to disagree on your last comment. But I'll test with the settings
10:13 Jeeves_ Those settings seem to help ( a lot )
10:13 Norky jolly good
10:14 Norky if JoeJulian comes on he may have some more useful advice, he's done a lot of work (fighting) with php
10:18 ppai joined #gluster
10:18 T3 joined #gluster
10:23 glusterbot News from newglusterbugs: [Bug 1212385] Disable rpc throttling for glusterfs protocol <https://bugzilla.redhat.com/show_bug.cgi?id=1212385>
10:29 hgowtham joined #gluster
10:32 rafi1 joined #gluster
10:34 anoopcs joined #gluster
10:35 jiffin1 joined #gluster
10:38 anoopcs1 joined #gluster
10:39 jmarley joined #gluster
10:45 jiku joined #gluster
10:50 meghanam joined #gluster
10:51 soumya joined #gluster
10:51 hchiramm joined #gluster
10:53 glusterbot News from newglusterbugs: [Bug 1212398] [New] - Distribute replicate volume type is shown as Distribute Stripe in  the output of gluster volume info <volname> --xml <https://bugzilla.redhat.com/show_bug.cgi?id=1212398>
10:53 glusterbot News from newglusterbugs: [Bug 1212400] Attach tier failing and messing up vol info <https://bugzilla.redhat.com/show_bug.cgi?id=1212400>
11:01 Philambdo joined #gluster
11:14 hchiramm joined #gluster
11:18 gildub joined #gluster
11:19 T3 joined #gluster
11:21 rafi joined #gluster
11:23 anoopcs joined #gluster
11:28 SOLDIERz joined #gluster
11:29 jiku joined #gluster
11:30 ppai joined #gluster
11:30 LebedevRI joined #gluster
11:57 hchiramm joined #gluster
11:58 overclk joined #gluster
12:00 anoopcs joined #gluster
12:06 poornimag joined #gluster
12:09 soumya joined #gluster
12:11 meghanam joined #gluster
12:14 soumya joined #gluster
12:15 [Enrico] joined #gluster
12:19 T3 joined #gluster
12:21 anil joined #gluster
12:23 glusterbot News from newglusterbugs: [Bug 1212437] probing and detaching a peer generated a CRITICAL error - "Could not find peer" in glusterd logs <https://bugzilla.redhat.com/show_bug.cgi?id=1212437>
12:26 nangthang joined #gluster
12:31 firemanxbr joined #gluster
12:32 Gill joined #gluster
12:33 DV joined #gluster
12:33 anrao joined #gluster
12:34 hchiramm joined #gluster
12:36 dgandhi joined #gluster
12:36 poornimag joined #gluster
12:38 bene2 joined #gluster
12:41 cesart joined #gluster
12:42 cesart Hello!
12:42 glusterbot cesart: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:42 cesart Is this where I can get help configuring Gluster volumes and to access the data?
12:45 schwing joined #gluster
12:47 julim joined #gluster
12:51 kanagaraj joined #gluster
12:53 wkf joined #gluster
12:55 anrao joined #gluster
12:59 poornimag joined #gluster
13:04 jiffin joined #gluster
13:11 ernetas left #gluster
13:18 DV joined #gluster
13:20 hchiramm joined #gluster
13:20 T3 joined #gluster
13:20 harish_ joined #gluster
13:25 hamiller joined #gluster
13:28 georgeh-LT2 joined #gluster
13:29 T3 joined #gluster
13:40 gvandeweyer joined #gluster
13:41 gvandeweyer hi, we're having issues with a fresh gluster install. we have 6 bricks, out of which 5 are empty (previous file server converted to brick + 5 new data servers added), setup in distribute
13:42 gvandeweyer mounting on clients works, but we are missing files when doing 'ls'
13:42 gvandeweyer direct access to missing files/folders however works, so they are there
13:43 gvandeweyer any suggestions on what we might be doing wrong?
13:43 ZakWolfinger joined #gluster
13:44 Slashman joined #gluster
13:45 hagarth joined #gluster
13:45 ZakWolfinger joined #gluster
13:46 ZakWolfinger left #gluster
13:48 schwing i'm still a pretty new user, but i think the first thing i would look at is to see if those missing files are somewhere on that 1st brick.  (perhaps they didn't copy in?)  the next thing would probably be to do a rebalance so the files actually distribute across all the bricks.
13:48 gvandeweyer schwing: all files are on the first brick
13:49 gvandeweyer as in, when mounting looking at the brick itself on the server, they are there
13:50 gvandeweyer is rebalancing necessary? I'm also a new user, and before we go into production, I wanted to be sure that all clients have correct access. so if necessary, we can still shut down gluster and reactivate plain nfs on the first server alone
13:51 schwing i seem to remember reading in the docs that doing a fix-layout rebalance would let gluster know of the file structure.  i could be off with that but maybe search for that and see what you find.
13:52 jmarley joined #gluster
13:53 schwing also, looking at my notes i have one that says i found this error:  "disk layout missing" in the logs and doing the fix-layout rebalance fixed it.  so check your logs for this string
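For the record, the fix-layout pass being suggested is a mode of the rebalance command itself; a sketch assuming the volume is named gshome (the name that appears in the DHT log lines quoted below):

    gluster volume rebalance gshome fix-layout start   # recalculates directory layouts only, moves no data
    gluster volume rebalance gshome status             # watch per-node progress
    # a later plain "start" (without fix-layout) also migrates existing files onto the new bricks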
13:58 anoopcs joined #gluster
14:01 jvandewege joined #gluster
14:01 gvandeweyer schwing: thanks, I don't see the error but the fix-layout rebalance is now taking longer than the last run (before I restarted glusterd). hopefully this will fix it
14:02 coredump joined #gluster
14:04 bene3 joined #gluster
14:05 schwing i like to tail the rebalance log file to watch what it fixes, if anything
14:08 gvandeweyer I get lots of gfid not present messages
14:10 schwing does it say anything about fixing them?  hopefully those are just information messages
14:11 gvandeweyer I think there are errors
14:11 gvandeweyer MSGID: 109036] [dht-common.c:6222:dht_log_new_layout_for_dir_selfheal] 0-gshome-dht: Setting layout of /tvandenbulcke with [Subvol_name: gshome-client-0, Err: -1 , Start: 427740003 , Stop: 2890628780 ], [Subvol_name: gshome-client-1, Err: -1 , Start: 0 , Stop: 427740002 ], [Subvol_name: gshome-client-2, Err: -1 , Start: 3183520440 , Stop: 3276346079 ], [Subvol_name: gshome-client-3, Err: -1 , Start: 3276346080 , Stop: 3862392740 ], [Subvol_name: g
14:11 soumya joined #gluster
14:11 haomaiwa_ joined #gluster
14:13 kotreshhr left #gluster
14:13 schwing right after the timestamp in the log there is a single letter code for what kind of message it is.  I=informational, W=warning, E=error.
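A quick way to pull only the errors out of that log, sketched with a guessed filename (the rebalance log is normally named after the volume under /var/log/glusterfs/):

    # lines look like: [2015-04-16 14:11:02.123456] E [dht-common.c:...] ...
    grep ' E \[' /var/log/glusterfs/gshome-rebalance.log | less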
14:14 hchiramm joined #gluster
14:14 gvandeweyer then the missing gfid entries are E errors
14:19 gvandeweyer if I do an 'ls /file' for a file listed as missing gfid, than next round of rebalance, the error for that file is fixed
14:22 schwing would that file happen to be one of the missing files from your mountpoint?  if so, is it now showing up via the mount?
14:24 gvandeweyer yes indeed. now its showing up
14:29 gvandeweyer hmm. the gfid seems to depend on the attr package, which is missing on the brickservers and clients
14:29 gvandeweyer let's install that first and see what happens
14:35 schwing i'd stop the rebalance first, too
14:35 gvandeweyer it's finished already. only took 80s or so
14:36 schwing how big is your data set?
14:36 gem_ joined #gluster
14:41 gvandeweyer 15Tb
14:51 gem_ joined #gluster
14:51 jobewan joined #gluster
14:53 gvandeweyer I'm currently running the following on the first (filled) brick, from the brick folder. the glusterfs volume is mounted on /home:
14:53 gvandeweyer find . -type d -exec ls /home/{} >/dev/null \;
14:53 gvandeweyer this seems to resolve the issue. once accessed, they appear on the nodes
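A slightly gentler variant of the same idea, assuming as above that it is run from the top of the first brick with the volume mounted on /home: stat each directory through the mount, which should trigger the same DHT lookup without also listing every entry:

    find . -type d -print0 | xargs -0 -I{} stat /home/{} > /dev/null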
14:53 ildefonso joined #gluster
14:54 schwing maybe this is part of the self-heal process?
14:55 schwing did you find an article or blog that said to try that?
14:55 gvandeweyer no, i noticed that once you accessed a file or folder listed as missing gfid directly by name, it appeared everywhere
14:55 gvandeweyer further tests showed that accessing folders was enough
14:56 gvandeweyer is self-heal also for non-replicated distributed gluster volumes?
14:58 kshlm joined #gluster
14:58 schwing actually, i don't think so
14:59 Norky you'd just want a rebalance for those I think
14:59 lpabon joined #gluster
15:00 gvandeweyer Norky: will do a full rebalance once all clients can see all data.
15:01 gvandeweyer what would be the impact on performance of doing a 1 filled to 6 balanced operation?
15:02 gvandeweyer can it be done on an active production server?
15:02 jobewan joined #gluster
15:02 Norky some, and yes
15:03 Norky as to whether that impact is greater than the impact of stopping any clients accessing it, doing the rebalance and then reconnecting the clients, it depends, but doing it online is probably lower impact
15:03 Norky sorry, and open-ended answer I know
15:03 Norky an*
15:05 hagarth joined #gluster
15:06 gvandeweyer ok, thanks for the info
15:07 jiku joined #gluster
15:19 nangthang joined #gluster
15:33 hchiramm joined #gluster
15:33 anoopcs joined #gluster
15:34 kdhananjay joined #gluster
15:47 atinmu joined #gluster
16:05 cholcombe joined #gluster
16:13 vimal joined #gluster
16:15 atinmu joined #gluster
16:15 sankarshan joined #gluster
16:17 Pupeno joined #gluster
16:17 Pupeno joined #gluster
16:30 oxae joined #gluster
16:30 Le22S joined #gluster
16:31 T3 joined #gluster
16:31 T3 joined #gluster
16:34 squizzi joined #gluster
16:37 khanku joined #gluster
16:39 anoopcs joined #gluster
16:42 Arminder joined #gluster
16:45 khanku joined #gluster
16:46 RameshN_ joined #gluster
16:46 chirino joined #gluster
16:50 DV joined #gluster
17:01 poornimag joined #gluster
17:02 dtrainor joined #gluster
17:05 dtrainor morning!  I recently reinstalled a system and because I'm so smart, I copied my volume info to my Gluster volume before doing so.  Which means I can't access the volume info.  I'm not sure how to rebuild these volumes.  I don't remember what kind of configuration or order I had these disks in, so I'm hoping this information is available via xattrs
17:08 JoeJulian More or less. You can look at the trusted.afr records.
17:10 JoeJulian For instance...
17:10 JoeJulian # file: volume-d4904800-2a0e-4f49-abde-dba7f0ca4e71
17:10 JoeJulian trusted.afr.iso-client-0=0x000000000000000000000000
17:10 JoeJulian trusted.afr.iso-client-1=0x000000000000000000000000
17:11 JoeJulian That tells us that this is part of the first dht subvolume, and that this is either brick 1 or brick 2 (client-0 is brick 1).
17:12 dtrainor oh, cool.
17:13 JoeJulian If they're all zeros (which they should be if the volume was healthy when it was stopped) then the order of replica pairs shouldn't matter.
17:13 dtrainor oh ok.  they'll just fall in to place?
17:13 JoeJulian As long as they're done in pairs.
17:13 dtrainor sure
17:13 dtrainor reading your blog about it right now in fact
17:13 JoeJulian So like brick 1 and 2 are a pair. If the xattrs are all clean, then brick2,brick1 should be safe.
17:13 JoeJulian But the dht order does matter.
17:14 dtrainor though I need to specify stripe and replica during volume creation
17:15 JoeJulian oh shit, stripe... not sure on that one.
17:15 dtrainor d'oh
17:15 dtrainor i want to say i had 2x2
17:15 JoeJulian stripe's the proverbial red headed stepchild.
17:15 dtrainor haha that bad huh
17:15 dtrainor i had 4x 2TB drives for a total of 4TB storage
17:16 dtrainor so i think stripe count 2 replica 2
17:17 JoeJulian probably still works then.
17:18 RameshN_ joined #gluster
17:19 dtrainor guess we'll find out huh?  if I assemble the volume the wrong way, is it going to destroy any data, even if I don't write anything?
17:22 DV joined #gluster
17:26 maveric_amitc_ joined #gluster
17:27 RameshN_ joined #gluster
17:32 dtrainor JoeJulian, which getfattr arguments did you use to get that info?
17:32 dtrainor I'm not seeing trusted.afr.iso-client-N
17:33 dtrainor oh, nm.  i see it.  iso-client is the vol name
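For reference, the xattr dump quoted earlier typically comes from something like the following, run against a brick directory on the server (the path is a placeholder; getfattr is what the attr package mentioned earlier in the log provides):

    getfattr -m . -d -e hex /bricks/iso/brick1
    # trusted.afr.<vol>-client-N  -> which replica position this brick holds
    # trusted.glusterfs.dht       -> this brick's hash range within its DHT subvolume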
17:34 Prilly joined #gluster
17:41 roost joined #gluster
17:42 squizzi joined #gluster
17:46 dtrainor volume create: fast_gv00: failed: /gluster/bricks/fast/fast_v00_b00/data or a prefix of it is already part of a volume
17:46 glusterbot dtrainor: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
17:46 dtrainor nice.
17:47 dtrainor Is this going to let me re-use the data that already exists on these bricks?
17:55 glusterbot yes, probably
17:55 dtrainor oh man.  it worked.
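The fix behind that link amounts to clearing the xattrs that mark the brick as belonging to a previous volume; a sketch using the path from the error above. Whether the brick's .glusterfs directory should also be cleared depends on whether the old data is being reused (as it is here), so follow the link for the full treatment:

    setfattr -x trusted.glusterfs.volume-id /gluster/bricks/fast/fast_v00_b00/data
    setfattr -x trusted.gfid /gluster/bricks/fast/fast_v00_b00/data
    service glusterd restart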
17:59 gnudna joined #gluster
17:59 gnudna left #gluster
18:00 gnudna joined #gluster
18:03 DV joined #gluster
18:09 Alpinist joined #gluster
18:15 atinmu joined #gluster
18:16 dtrainor well, i think that worked.
18:16 lpabon joined #gluster
18:20 dtrainor I'm not sure about that... I see this in the logs:  http://fpaste.org/212014/92083901/
18:20 dtrainor The filesystem appears to be operable otherwise, though
18:21 lalatenduM joined #gluster
18:23 chirino joined #gluster
18:25 deniszh joined #gluster
18:42 cholcombe joined #gluster
18:47 ChrisHolcombe joined #gluster
18:50 lyang0 joined #gluster
19:15 squizzi joined #gluster
19:35 gildub joined #gluster
19:46 jmarley joined #gluster
19:58 redbeard joined #gluster
20:11 Asako joined #gluster
20:13 cholcombe joined #gluster
20:18 Asako what's the procedure for cloning a gluster member?
20:20 Asako I built a new node from a snapshot and glusterd won't start
20:22 jbrooks joined #gluster
20:26 DV joined #gluster
20:32 JoeJulian Asako: wipe the state: /var/lib/glusterd
20:32 Asako ok
20:33 Asako do I need to delete .glusterfs?
20:34 JoeJulian That all depends on what you're deleting it from.
20:35 Arminder joined #gluster
20:36 Arminder joined #gluster
20:36 Asako think I got it, thanks
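The usual shape of that clean-up, sketched on the assumption that the clone should join as a brand-new peer (fresh UUID) rather than impersonate the node it was snapshotted from; newnode is a placeholder hostname:

    service glusterd stop                      # or systemctl, depending on the distro
    rm -f  /var/lib/glusterd/glusterd.info     # the copied UUID; a new one is generated on start
    rm -rf /var/lib/glusterd/peers /var/lib/glusterd/vols   # copied peer and volume state
    service glusterd start
    gluster peer probe newnode                 # run from an existing member to bring the clone in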
20:37 Arminder joined #gluster
20:38 Arminder joined #gluster
20:39 Arminder joined #gluster
20:40 Arminder joined #gluster
20:41 Arminder joined #gluster
20:42 Arminder joined #gluster
20:43 Arminder joined #gluster
20:44 Arminder joined #gluster
20:44 Arminder joined #gluster
20:45 Arminder joined #gluster
21:07 gnudna left #gluster
21:19 wkf joined #gluster
21:27 badone|brb joined #gluster
21:33 jermudgeon joined #gluster
21:53 T3 joined #gluster
22:04 plarsen joined #gluster
22:19 trig joined #gluster
22:20 plarsen joined #gluster
22:23 trig I am hoping I came to the right place, I just inherited a 2 node gluster system, no databases or anything, just a bunch of web content. Keep having failures on one that require a restart and really look like a network failure when clients are unable to connect to the server. The problem is, I have other servers mounted to the second server in the cluster and not a single issue or log message on the second server or clients.
22:30 badone|brb joined #gluster
22:37 plarsen joined #gluster
22:44 Gill joined #gluster
22:53 T3 joined #gluster
22:58 plarsen joined #gluster
23:02 lexi2 joined #gluster
23:26 maveric_amitc_ joined #gluster
23:42 virusuy_ joined #gluster
23:54 T3 joined #gluster
