
IRC log for #gluster, 2016-07-05


All times shown according to UTC.

Time Nick Message
00:00 rafi joined #gluster
00:01 rafi joined #gluster
00:09 Javezim joined #gluster
00:11 samikshan joined #gluster
00:12 Javezim Hey All, I'm having an issue where the Gluster brick logs are growing extremely large very quickly across my sites.
00:12 Javezim In the Gluster Options I do have diagnostics.client-log-level: WARNING
00:12 Javezim diagnostics.brick-log-level: WARNING
00:13 Javezim When I check the Warnings/Errors in the /var/log/glusterfs/bricks all I see is http://paste.ubuntu.com/18480133/
00:13 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
00:13 Javezim Any idea what these warnings are?
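(For reference, a minimal sketch of how the log-level options quoted above are usually set and checked; "myvol" is a placeholder volume name, and "volume get" may not exist on older releases:)
    # show the current brick log level (assumes a 3.7.3+ release with "volume get")
    gluster volume get myvol diagnostics.brick-log-level
    # log only warnings and above on bricks and clients
    gluster volume set myvol diagnostics.brick-log-level WARNING
    gluster volume set myvol diagnostics.client-log-level WARNING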
00:16 telmich joined #gluster
00:42 shdeng joined #gluster
01:41 Lee1092 joined #gluster
02:17 poornimag joined #gluster
02:35 wnlx joined #gluster
03:13 harish_ joined #gluster
03:18 wnlx joined #gluster
03:19 RameshN joined #gluster
03:30 magrawal joined #gluster
03:32 sanoj joined #gluster
03:42 sakshi joined #gluster
03:45 kramdoss_ joined #gluster
03:50 hgowtham joined #gluster
03:50 sanoj joined #gluster
03:50 Manikandan joined #gluster
03:51 atinm joined #gluster
04:00 Gnomethrower joined #gluster
04:03 itisravi joined #gluster
04:04 itisravi_ joined #gluster
04:09 harish joined #gluster
04:22 shubhendu joined #gluster
04:27 Muthu_ joined #gluster
04:27 suliba joined #gluster
04:40 ndarshan joined #gluster
04:41 shubhendu joined #gluster
04:41 nehar joined #gluster
04:55 shubhendu joined #gluster
04:57 kramdoss_ joined #gluster
05:04 poornimag joined #gluster
05:11 gowtham joined #gluster
05:14 ppai joined #gluster
05:15 itisravi_ joined #gluster
05:18 prasanth_ joined #gluster
05:21 aspandey joined #gluster
05:25 Bhaskarakiran joined #gluster
05:28 ahino joined #gluster
05:30 msvbhat joined #gluster
05:36 anil joined #gluster
05:37 Javezim Anyone know what it is that zips up the /var/log/glusterfs/bricks files every day?
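(The daily compression asked about here is typically done by logrotate rather than by gluster itself; a hedged sketch of how to inspect it, assuming the packaged policy lives under /etc/logrotate.d/ - exact filenames vary by distribution:)
    # show the logrotate policy shipped with the gluster packages
    cat /etc/logrotate.d/glusterfs*
    # dry-run to see which brick logs would be rotated/compressed
    logrotate -d /etc/logrotate.d/glusterfs*
    # force a rotation to test the policy
    logrotate -f /etc/logrotate.d/glusterfs*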
05:38 devyani7_ joined #gluster
05:39 Apeksha joined #gluster
05:40 nishanth joined #gluster
05:44 karthik___ joined #gluster
05:45 [diablo] joined #gluster
05:47 Muthu__ joined #gluster
05:49 atalur joined #gluster
05:50 karnan joined #gluster
05:52 satya4ever joined #gluster
05:55 pur joined #gluster
06:03 skoduri joined #gluster
06:09 MikeLupe joined #gluster
06:12 kshlm joined #gluster
06:15 jtux joined #gluster
06:15 kdhananjay joined #gluster
06:15 tg2 joined #gluster
06:16 itisravi joined #gluster
06:22 hchiramm joined #gluster
06:24 ppai joined #gluster
06:26 hgowtham joined #gluster
06:30 tg2 joined #gluster
06:31 pdrakeweb joined #gluster
06:36 jri joined #gluster
06:45 aspandey joined #gluster
06:48 robb_nl joined #gluster
06:51 sakshi joined #gluster
06:57 kramdoss_ joined #gluster
07:09 jwd joined #gluster
07:10 plarsen joined #gluster
07:14 pg joined #gluster
07:18 [Enrico] joined #gluster
07:19 Ramereth joined #gluster
07:23 rastar joined #gluster
07:23 rafi joined #gluster
07:43 Klas Hrm, I'm currently seeing a couple of behaviours in gluster which seem uncommon and problematic
07:44 Klas 1: peers often show disconnected status in an unpredictable fashion
07:44 Klas restarting services tends to help
07:44 Klas 2: mounts seem to not be able to recover from read-only without an unmount/remount
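(A hedged sketch of the usual first checks for symptoms like these, assuming systemd, a volume "myvol" served by server1, and a client mount at /mnt/gluster - all placeholder names:)
    # check peer and brick health on a server node
    gluster peer status
    gluster volume status
    # the "restarting services" workaround: restart the management daemon
    systemctl restart glusterd
    # a FUSE mount stuck read-only generally has to be remounted on the client
    umount /mnt/gluster
    mount -t glusterfs server1:/myvol /mnt/gluster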
07:46 rafi joined #gluster
07:51 karthik___ joined #gluster
07:52 ppai joined #gluster
07:59 Klas the client seems to lock up the node it was mounted against, so that it behaves strangely somehow?
07:59 rafi joined #gluster
08:01 hackman joined #gluster
08:07 sanoj joined #gluster
08:09 rafi joined #gluster
08:10 Slashman joined #gluster
08:12 msvbhat joined #gluster
08:14 ivan_rossi joined #gluster
08:20 Saravanakmr joined #gluster
08:22 ppai joined #gluster
08:22 Seth_Karlo joined #gluster
08:25 ashiq joined #gluster
08:33 archit_ joined #gluster
08:34 arcolife joined #gluster
08:38 harish_ joined #gluster
08:39 Wizek joined #gluster
08:41 Muthu_ joined #gluster
08:41 Muthu__ joined #gluster
08:42 ppai joined #gluster
08:43 muneerse joined #gluster
08:48 ramky joined #gluster
09:03 kdhananjay joined #gluster
09:16 ppai joined #gluster
09:18 Alghost_ joined #gluster
09:24 skoduri joined #gluster
09:38 rastar joined #gluster
09:38 rafi joined #gluster
09:44 msvbhat joined #gluster
09:49 kramdoss_ joined #gluster
09:50 arif-ali joined #gluster
10:02 skoduri left #gluster
10:02 skoduri joined #gluster
10:10 ppai joined #gluster
10:16 itisravi joined #gluster
10:23 foster joined #gluster
10:26 jwd joined #gluster
10:35 surabhi joined #gluster
10:36 surabhi joined #gluster
10:42 kramdoss_ joined #gluster
10:44 sanoj joined #gluster
10:47 BT_ joined #gluster
10:53 HoloIRCUser1 joined #gluster
10:57 XpineX joined #gluster
11:12 HoloIRCUser4 joined #gluster
11:13 johnmilton joined #gluster
11:14 Klas I restarted my efforts on that cluster and it has been removed
11:14 Klas anyone successfully used backup-volfile-servers?
11:15 Klas https://bugzilla.redhat.com/show_bug.cgi?id=1222678 Seems to indicate that it's broken, which matches what I'm experiencing
11:15 glusterbot Bug 1222678: high, unspecified, ---, bugs, NEW , backupvolfile-server, backup-volfile-servers options in /etc/fstab / list of volfile-server options on command line ignored when mounting
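(For context, a minimal sketch of how backup-volfile-servers is normally passed at mount time, assuming placeholder servers gl1/gl2/gl3 and a volume "myvol" - whether the option is honoured is exactly what the bug above disputes:)
    # command-line form
    mount -t glusterfs -o backup-volfile-servers=gl2:gl3 gl1:/myvol /mnt/gluster
    # /etc/fstab form
    # gl1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,backup-volfile-servers=gl2:gl3  0 0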
11:18 Alghost joined #gluster
11:21 Alghost_ joined #gluster
11:23 sunnikri joined #gluster
11:36 ppai joined #gluster
11:37 shubhendu joined #gluster
11:41 HoloIRCUser4 joined #gluster
11:43 sanoj joined #gluster
11:45 ira joined #gluster
11:48 the-me joined #gluster
11:55 Saravanakmr #REMINDER: Gluster Community Bug Triage meeting at 12:00 UTC #gluster-meeting
11:59 Muthu__ joined #gluster
11:59 Muthu_ joined #gluster
12:10 HoloIRCUser1 joined #gluster
12:10 HoloIRCUser4 joined #gluster
12:11 ashiq joined #gluster
12:13 msvbhat joined #gluster
12:17 cloph Hi*, need help reading logs/finding cause of geo-replication problem. In slave log I have
12:17 cloph [2016-07-05 12:11:45.600752] W [fuse-bridge.c:1999:fuse_create_cbk] 0-glusterfs-fuse: 3904818: /.gfid/40981c09-dbb8-42a7-ac9d-798132876981 => -1 (Operation not permitted)
12:17 cloph on master it is
12:17 cloph [2016-07-03 11:36:12.627445] E [master(/srv/backup/gluster):784:log_failures] _GMaster: ENTRY FAILED: ({'gfid': '40981c09-dbb8-42a7-ac9d-798132876981', 'entry': '.gfid/9bd9f8c9-fdd9-4dda-ad67-d53a39629e88/ftp.de.debian.org_debian_dists_jessie-backports_main_binary-amd64_Packages', 'stat': {'atime': 1467512775.8882499, 'gid': 0, 'mtime': 1467430153.0, 'mode': 33188, 'uid': 0}, 'op': 'LINK'}, 17, '79b1a849-8aff-42c1-8332-26105409580c')
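(A hedged sketch of where one would usually look next for a geo-replication entry failure like this, assuming a master volume "mastervol" and a slave "slavehost::slavevol" - both placeholders:)
    # per-brick session state on the master
    gluster volume geo-replication mastervol slavehost::slavevol status detail
    # worker logs on the master, and slave-side logs on the slave
    ls /var/log/glusterfs/geo-replication/
    ls /var/log/glusterfs/geo-replication-slaves/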
12:19 poornimag joined #gluster
12:19 pg joined #gluster
12:21 unclemarc joined #gluster
12:21 karnan joined #gluster
12:31 kpease joined #gluster
12:36 guhcampos joined #gluster
12:38 B21956 joined #gluster
12:39 HoloIRCUser6 joined #gluster
12:43 chirino joined #gluster
12:45 B21956 joined #gluster
12:53 hi11111 joined #gluster
12:55 atinm joined #gluster
12:55 hchiramm joined #gluster
13:04 ben453 joined #gluster
13:05 julim joined #gluster
13:17 hchiramm_ joined #gluster
13:24 nehar joined #gluster
13:25 Seth_Karlo joined #gluster
13:29 Muthu_ joined #gluster
13:30 Muthu__ joined #gluster
13:38 rwheeler joined #gluster
13:40 skoduri joined #gluster
13:41 Wizek joined #gluster
13:45 atinm joined #gluster
13:58 dnunez joined #gluster
13:59 shyam joined #gluster
14:01 chirino joined #gluster
14:04 bowhunter joined #gluster
14:10 jiffin joined #gluster
14:11 rafi joined #gluster
14:11 jiffin1 joined #gluster
14:12 kpease joined #gluster
14:14 karnan joined #gluster
14:14 rafi joined #gluster
14:15 squizzi joined #gluster
14:15 karnan joined #gluster
14:23 jiffin1 joined #gluster
14:26 rafi joined #gluster
14:38 dnunez joined #gluster
14:39 rwheeler joined #gluster
14:42 Bhaskarakiran joined #gluster
14:47 jiffin1 joined #gluster
14:50 jiffin joined #gluster
15:01 msvbhat joined #gluster
15:05 dnunez joined #gluster
15:05 wushudoin| joined #gluster
15:09 lpabon joined #gluster
15:13 cholcombe joined #gluster
15:18 guhcampos joined #gluster
15:20 unforgiven512 joined #gluster
15:21 unforgiven512 joined #gluster
15:21 unforgiven512 joined #gluster
15:22 unforgiven512 joined #gluster
15:23 unforgiven512 joined #gluster
15:23 shubhendu joined #gluster
15:23 unforgiven512 joined #gluster
15:24 unforgiven512 joined #gluster
15:25 unforgiven512 joined #gluster
15:28 unforgiven512 joined #gluster
15:28 unforgiven512 joined #gluster
15:29 unforgiven512 joined #gluster
15:30 unforgiven512 joined #gluster
15:31 unforgiven512 joined #gluster
15:34 HoloIRCUser6 joined #gluster
15:34 unforgiven512 joined #gluster
15:35 DV joined #gluster
15:36 gnulnx So I'm having an unexpected issue with a distributed volume:  With two bricks, brick1 has 21.7T usage, brick2 has 2.94T usage.  I've run the rebalance commands twice.  What else could be up?
15:36 unforgiven512 joined #gluster
15:36 gnulnx Probably worth noting that I created this volume with a single brick to test with, and then later added brick2.  I've since run the fix-layout command (as well as rebalance)
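(For reference, a hedged sketch of the rebalance sequence being described, with "myvol" as a placeholder volume name:)
    # rewrite the directory layout so newly added bricks get a hash range
    gluster volume rebalance myvol fix-layout start
    # migrate existing data according to the new layout
    gluster volume rebalance myvol start
    # per-node progress, rebalanced file counts and failures
    gluster volume rebalance myvol status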
15:36 unforgiven512 joined #gluster
15:37 unforgiven512 joined #gluster
15:38 unforgiven512 joined #gluster
15:38 unforgiven512 joined #gluster
15:45 kshlm joined #gluster
15:47 ivan_rossi left #gluster
15:49 Apeksha joined #gluster
15:51 kpease_ joined #gluster
15:55 gnulnx Just did a simple test:  Created a new distributed volume with 1 brick.  Started and mounted it.  Created 10 test files under the mount point.  Verified that the 10 files exist under the brick dir, and the mount dir.  Added a second brick (on the second server), ran the rebalance, and none of the files were transferred over
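(A hedged sketch of that reproduction with placeholder hosts and brick paths; add "force" to the create/add-brick commands if the bricks sit on the root filesystem:)
    gluster volume create test server1:/data/brick1
    gluster volume start test
    mount -t glusterfs server1:/test /mnt/test
    for i in $(seq 1 10); do echo x > /mnt/test/file$i; done
    # later, expand the volume and rebalance
    gluster volume add-brick test server2:/data/brick2
    gluster volume rebalance test start
    gluster volume rebalance test status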
15:57 Seth_Karlo joined #gluster
15:58 cloph gnulnx: http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Rebalancing_Volume_to_Migrate_Existing_Data
15:59 gnulnx cloph: I did 'gluster volume rebalance test start', which the docs say is supposed to both fix the layout and migrate data, no?
15:59 gnulnx http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Rebalancing_Volume_to_Fix_Layout_and_Migrate_Existing_Data
16:00 jiffin1 joined #gluster
16:00 gnulnx OK, performed the same test, but with both bricks on the same server, and the rebalance worked.
16:05 jiffin joined #gluster
16:10 cloph no idea, I'm an end user only; thought that explicitly triggering the migrate might help. Do new files get distributed across the bricks as expected?
16:13 BTT joined #gluster
16:17 baojg joined #gluster
16:17 kpease joined #gluster
16:18 dlambrig_ joined #gluster
16:23 kpease joined #gluster
16:24 gnulnx cloph: Negative, they do not.
16:25 gnulnx However...  If I create the volume with both bricks originally, then files get distributed as expected.
16:30 wadeholler joined #gluster
16:31 jiffin1 joined #gluster
16:31 karnan joined #gluster
16:31 ghenry joined #gluster
16:31 ghenry joined #gluster
16:31 cloph then it seems it wasn't added properly - closely examine volume info / status
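(The checks being suggested, in a hedged sketch using the "test" volume name from above:)
    # should list both bricks with the expected host:path entries
    gluster volume info test
    # shows whether each brick process is actually online, plus disk usage per brick
    gluster volume status test detail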
16:35 bowhunter joined #gluster
16:37 BTT left #gluster
16:54 Wizek joined #gluster
17:00 jbrooks joined #gluster
17:23 mdavidson joined #gluster
17:26 squizzi joined #gluster
17:47 baojg joined #gluster
17:53 rwheeler joined #gluster
17:57 B21956 joined #gluster
18:02 jri joined #gluster
18:11 shubhendu joined #gluster
18:22 jiffin joined #gluster
18:26 ashiq joined #gluster
18:27 jiffin joined #gluster
18:28 jiffin1 joined #gluster
18:32 cliluw joined #gluster
18:33 jiffin1 joined #gluster
18:41 jiffin1 joined #gluster
18:42 jri joined #gluster
18:46 jiffin joined #gluster
18:48 jiffin1 joined #gluster
18:49 baojg joined #gluster
18:56 jiffin joined #gluster
18:58 jiffin1 joined #gluster
18:58 ahino joined #gluster
19:00 arcolife joined #gluster
19:04 jwd joined #gluster
19:06 jiffin1 joined #gluster
19:09 wadeholler hi all:  I think I am experiencing the same error described here: http://serverfault.com/questions/782602/glusterfs-rebalancing-volume-failed with 3.7.11, 3.7.12, and 3.8.0
19:09 glusterbot Title: GlusterFS rebalancing volume failed - Server Fault (at serverfault.com)
19:09 wadeholler could someone help / point me in a direction ?
19:12 jiffin joined #gluster
19:15 post-factum wadeholler: ye
19:15 post-factum wadeholler: you install debug symbols, get a core file, and then get a stacktrace
19:15 post-factum wadeholler: then write a detailed description of what you did to trigger the crash to the ML
19:16 jiffin1 joined #gluster
19:16 wadeholler post-factum: ok. I will start down that path.  Is there a guide somewhere for doing those steps? If not I'll figure it out / google it.
19:17 post-factum wadeholler: not sure. try searching the gluster docs, but debugging is a pretty common thing to google
19:17 wadeholler post-factum:  ok. thank you.
19:18 post-factum like 'gdb /usr/sbin/glusterfs /path/to/core/file' then 'bt' <enter>
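(Expanding those hints into a hedged sketch; package names and paths are distribution-dependent assumptions, e.g. glusterfs-debuginfo on RPM distros vs glusterfs-dbg on Debian/Ubuntu:)
    # install debug symbols (package name varies by distribution)
    yum install glusterfs-debuginfo
    # allow core dumps in the shell that reproduces the crash
    ulimit -c unlimited
    # load the core and capture the backtrace for the mailing list
    gdb /usr/sbin/glusterfs /path/to/core/file
    (gdb) bt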
19:21 jiffin joined #gluster
19:23 jiffin1 joined #gluster
19:28 jiffin1 joined #gluster
19:32 johnmilton joined #gluster
19:34 jiffin joined #gluster
19:40 kpease joined #gluster
19:45 julim joined #gluster
19:49 hackman joined #gluster
19:50 johnmilton joined #gluster
19:50 baojg joined #gluster
19:55 F2Knight_ joined #gluster
19:57 bluenemo joined #gluster
20:12 jwd joined #gluster
20:15 Wizek_ joined #gluster
20:16 wadeholler joined #gluster
20:18 wadeholler joined #gluster
20:24 F2Knight joined #gluster
20:40 F2Knight joined #gluster
20:51 baojg joined #gluster
20:57 edong23 joined #gluster
21:16 HoloIRCUser joined #gluster
21:19 F2Knight joined #gluster
21:26 F2Knight joined #gluster
21:28 pampan joined #gluster
21:31 F2Knight joined #gluster
21:34 F2Knight joined #gluster
21:41 kpease joined #gluster
21:45 pampan Hi guys! I have a question regarding gluster behaviour. Let's say that I have a cluster with three peers, but clients have only two of those three peers configured as remotes. Will that cause any problems? I'm asking because I'm not seeing file deletions being propagated to the unused peer. I assumed that, even if it's not used by the clients, the other peers would be in charge of keeping the
21:45 pampan unused peer in sync.
21:45 pampan I'm using 3.5.7, by the way.
21:45 HoloIRCUser joined #gluster
21:45 HoloIRCUser3 joined #gluster
21:46 cloph how can you only configure two of the peers?
21:47 cloph each peer in the cluster knows all peers, so having one not know of the others cannot really happen. You can make it so there is no route to it, but that of course is pretty pointless to do in the first place.
21:48 pampan Hi cloph, thanks for answering. Sure, I'm speaking client side.
21:48 pampan volume replicate
21:49 pampan type cluster/replicate
21:49 pampan subvolumes  remote1  remote2 remote3
21:49 pampan end-volume
21:49 pampan For example, by not defining remote3
21:51 F2Knight joined #gluster
21:51 cloph still don't understand the question. you mean you have one peer in the cluster that is not used as a brick at all?
21:52 baojg joined #gluster
21:53 pampan That's the case, yes, it's a new peer and I was waiting for it to be in sync before putting it as a remote on the clients
21:53 pampan but the sync is not happening, that's why I'm asking about gluster behaviour
22:00 cloph I still don't understand your use of the term client. Either it is part of the volume or it isn't, and that alone determines whether data should sync to its brick. Whether you mount the volume on a client or not is independent of the self-heal...
22:00 F2Knight joined #gluster
22:03 pampan by client I mean anything that mount the gluster volume, as in mount -t glusterfs <.vol file> <mount point>
22:04 cloph those client mounts don't care what goes on behind the scenes, so as long as quorum is met, they will work, no matter whether data still needs to be synced behind the scenes.
22:04 pampan that's the idea I have about how glusterfs works
22:05 cloph attempting to access a file will have gluster check whether the state is sane, and if necessary trigger heal for the file
22:06 pampan but it happens to me that the new replica will only start syncing after I mount it... and it will only replicate the files that I have 'stat'ed
22:06 pampan right, so, until that file is accessed, it won't be replicated?
22:07 cloph you can force that process
22:07 pampan I've tried with glfsheal and with "gluster volume heal <volume>"
22:08 pampan with the "full" options and whatever, but without success
22:08 pampan the replica will not sync
22:08 cloph volume heal volname full
22:09 pampan it does not work for me
22:09 cloph yeah. that to my understanding also just does a "find <volume> -print0 | xargs -0 stat" or something similar (i.e. accesses every file)
22:11 pampan doesn't make sense to me if it does that, because it does it on the server brick, right?
22:11 pampan afaik the stat should be done on a mount
22:11 pampan a gluster mount
22:12 cloph yes, if you would run the find
22:13 cloph that was meant as an equivalent of what the heal full command would do. If you use the "find .. stat" method, it won't matter which client mount you run it on.
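(A hedged sketch of the two approaches discussed, assuming a placeholder volume "myvol" mounted at /mnt/gluster:)
    # ask the self-heal daemon to crawl the whole volume and heal everything
    gluster volume heal myvol full
    # list entries still pending heal
    gluster volume heal myvol info
    # the manual equivalent cloph describes: stat every file through a client mount
    find /mnt/gluster -print0 | xargs -0 stat > /dev/null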
22:14 HoloIRCUser joined #gluster
22:15 HoloIRCUser3 joined #gluster
22:21 pampan I don't get, then, why the replica that is not being used by any mount is not deleting the files that are being removed in the other replicas
22:25 F2Knight joined #gluster
22:27 jbrooks joined #gluster
22:44 HoloIRCUser4 joined #gluster
22:44 HoloIRCUser3 joined #gluster
22:46 pampan so, the replica I had, the one that wasn't deleting the files, is now replicating those files to the other replicas
22:46 kpease joined #gluster
22:52 baojg joined #gluster
23:02 kpease joined #gluster
23:13 HoloIRCUser4 joined #gluster
23:13 HoloIRCUser3 joined #gluster
23:34 Klas joined #gluster
23:42 HoloIRCUser joined #gluster
23:43 HoloIRCUser3 joined #gluster
23:53 baojg joined #gluster
