
IRC log for #gluster, 2017-07-05


All times are shown in UTC.

Time Nick Message
00:13 victori joined #gluster
00:28 shyam joined #gluster
00:43 victori joined #gluster
00:48 gyadav_ joined #gluster
01:18 victori joined #gluster
01:31 shyam joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:49 victori joined #gluster
02:30 gyadav_ joined #gluster
02:51 ankitr joined #gluster
03:16 kotreshhr joined #gluster
03:40 gyadav_ joined #gluster
03:42 victori joined #gluster
03:49 itisravi joined #gluster
03:50 Karan joined #gluster
03:55 nbalacha joined #gluster
03:58 jeffspeff joined #gluster
04:02 atinm joined #gluster
04:07 k0nsl joined #gluster
04:07 k0nsl joined #gluster
04:08 susant joined #gluster
04:09 buvanesh_kumar joined #gluster
04:12 rideh joined #gluster
04:19 pioto joined #gluster
04:21 apandey joined #gluster
04:21 dominicpg joined #gluster
04:29 Saravanakmr joined #gluster
04:31 Shu6h3ndu joined #gluster
04:33 msvbhat joined #gluster
04:38 skumar joined #gluster
04:41 kramdoss_ joined #gluster
04:48 ndarshan joined #gluster
04:50 gyadav_ joined #gluster
04:58 k0nsl joined #gluster
04:58 k0nsl joined #gluster
04:59 ankitr joined #gluster
05:04 ankitr joined #gluster
05:06 jiffin joined #gluster
05:06 amarts joined #gluster
05:07 rafi1 joined #gluster
05:07 itisravi joined #gluster
05:11 karthik_us joined #gluster
05:16 Saravanakmr joined #gluster
05:20 k0nsl joined #gluster
05:20 k0nsl joined #gluster
05:21 victori joined #gluster
05:24 k0nsl joined #gluster
05:24 k0nsl joined #gluster
05:45 hgowtham joined #gluster
05:45 kdhananjay joined #gluster
05:51 k0nsl joined #gluster
05:51 k0nsl joined #gluster
05:52 Prasad joined #gluster
05:55 kotreshhr joined #gluster
05:58 itisravi_ joined #gluster
06:03 Wizek_ joined #gluster
06:05 sona joined #gluster
06:10 sanoj joined #gluster
06:10 amarts joined #gluster
06:20 atinm joined #gluster
06:30 ashiq joined #gluster
06:33 kramdoss_ joined #gluster
06:44 jiffin joined #gluster
06:55 ankitr joined #gluster
06:55 kramdoss_ joined #gluster
06:56 atinm joined #gluster
06:57 Klas /var/lib/misc/glusterfsd is growing too big due to replicas, is there any good way to move it?
06:57 amarts joined #gluster
06:57 saintpablo joined #gluster
07:00 rastar joined #gluster
07:01 bartden joined #gluster
07:01 bartden hi, what is the difference between force-readdirp and readdir-ahead?
07:02 Wizek_ joined #gluster
07:05 kharloss joined #gluster
07:06 [diablo] joined #gluster
07:10 jkroon joined #gluster
07:14 psony joined #gluster
07:19 ivan_rossi joined #gluster
07:26 _KaszpiR_ joined #gluster
07:28 rafi joined #gluster
07:37 fsimonce joined #gluster
07:43 Peppard another question to the void: instead of remove-brick start in a distributed volume, would there be a problem using remove-brick force and moving the files manually from the removed brick to the volume?
07:45 ndevos bartden: readdirp returns more than just the filenames in a readdir operation (like the 'stat' attributes)
07:46 ndevos bartden: readdir-ahead guesses when a readdir will be needed, and does it before the user/application calls it, so it can build caches in advance
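
Both behaviours map to per-volume tunables in the gluster CLI. A minimal sketch, assuming a hypothetical volume named "myvol" and the usual 3.x option names (performance.force-readdirp belongs to md-cache, performance.readdir-ahead enables the readdir-ahead translator):

    # ask md-cache to upgrade plain readdir calls to readdirp,
    # so stat data is fetched together with the directory entries
    gluster volume set myvol performance.force-readdirp on
    # prefetch directory entries before the application asks for them
    gluster volume set myvol performance.readdir-ahead on
    # inspect the effective values
    gluster volume get myvol performance.force-readdirp
    gluster volume get myvol performance.readdir-ahead
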
07:55 bartden Hi ndevos thanks, i have a strange issue with gluster 3.7.5 (i know i have to upgrade). We calculate hashes before transferring and after storing on gluster. From time to time we notice that 4 bytes have been altered in the file (md5 hash inconsistency). We notice this more frequently whenever we do directory listings while a file is being stored. The data stays altered, it is not the case that it needs to flush the data and then the hash becomes correct
07:55 bartden When i do a direct md5 calculation on the brick itself, it gives the same results
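
A minimal sketch of the check bartden describes, comparing the checksum seen through a client mount with one computed directly on the brick backend (all paths are hypothetical):

    # checksum of the file as seen through the gluster mount
    md5sum /mnt/myvol/data/sample.bin
    # checksum of the same file read directly from the brick
    md5sum /export/brick1/data/sample.bin
    # if both differ from the checksum taken before the upload,
    # the data was altered on the write path, not on a later read
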
07:58 ndevos bartden: that sounds quite worrisome, we'll need to know if that happens with a newer version too
07:58 ndevos bartden: is it possible for you to test it with a simpler, but similar setup and newer version?
07:59 bartden not at this stage since i'm off on vacation :)
07:59 ndevos ah, ok!
07:59 bartden but we are going to trace the data and try to figure out where it will be altered (client translators, server translators)
08:00 bartden i was thinking of write-behind … but this eventually flushes the data to the storage … so i guess this could not be the issue
08:00 bartden We do encrypt the bricks and transfer data between storage and clients via IPSec, perhaps we need to eliminate this first
08:00 ndevos it indeed should flush, but that is with all the translators...
08:01 ndevos I would not expect that IPSec is a problem, but you never know
08:14 nbalacha Peppard, remove-brick force would make the files immediately inaccessible
08:26 nbalacha Peppard, and you would need to run a fix-layout before copying files over
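
For context, the two remove-brick modes and the fix-layout step discussed here look roughly like this (volume and brick names are hypothetical):

    # normal path: migrate the data off the brick, then commit
    gluster volume remove-brick myvol server1:/export/brick1 start
    gluster volume remove-brick myvol server1:/export/brick1 status
    gluster volume remove-brick myvol server1:/export/brick1 commit
    # forced path: detach the brick immediately, without migrating data
    gluster volume remove-brick myvol server1:/export/brick1 force
    # after a forced removal, rebuild the directory hash layout
    gluster volume rebalance myvol fix-layout start
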
08:27 _nixpanic joined #gluster
08:27 _nixpanic joined #gluster
08:30 Limebyte joined #gluster
08:31 shruti` joined #gluster
08:33 riyas joined #gluster
08:42 _KaszpiR_ joined #gluster
08:48 itisravi joined #gluster
08:52 victori joined #gluster
08:54 gyadav joined #gluster
08:55 msvbhat joined #gluster
09:01 skumar_ joined #gluster
09:04 itisravi joined #gluster
09:04 jiffin1 joined #gluster
09:15 buvanesh_kumar_ joined #gluster
09:16 victori joined #gluster
09:21 amarts joined #gluster
09:28 atinm joined #gluster
09:30 itisravi_ joined #gluster
09:36 itisravi joined #gluster
09:40 msvbhat joined #gluster
09:44 Peppard nbalacha: thx, temporary inaccessibility would be ok, but currently the remove-brick rebalance is filling some bricks to 100%, which i totally don't like
09:45 nbalacha Peppard, which version are you running
09:45 nbalacha there was a fix for this recently
09:45 Peppard 3.8.12
09:45 nbalacha k
09:45 Peppard ah ok, do you know the version?
09:45 nbalacha I would need to check but the fix went in very recently
09:46 Peppard hm, so maybe i should upgrade first
09:46 amarts joined #gluster
09:47 nbalacha Peppard, let me get back to you
09:47 Peppard or try the force/move combination ;)
09:51 nbalacha Peppard, the fix is only in master at the moment
09:51 Peppard ah ok
09:51 nbalacha Peppard, so upgrade will not help
09:51 nick_g Hi guys, can someone point me to a place where I can read about each option in the output of "gluster volume get VOLUME-NAME all"?
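
One starting point is the CLI itself, which can print a short description for each settable option; a quick sketch with a hypothetical volume name:

    # list every option and its current or default value for the volume
    gluster volume get myvol all
    # print the settable options together with a short description of each
    gluster volume set help
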
09:52 nbalacha Peppard, if you do  a force, it will not free up the other bricks
09:52 nbalacha Peppard, you would need to do that manually
09:53 Peppard "free up"? what do you mean?
09:53 nbalacha Peppard, the bricks that are at 100% will stay at 100%
09:53 Peppard oh
09:53 nbalacha as the data has already been copied
09:54 Peppard well ok, that's expected, but if i rebalance?
09:54 nbalacha Peppard, if the bricks are at 100% rebal might not work as certain fops will fail
09:55 nbalacha Peppard, how many bricks in the volume and how full are they
09:55 Peppard 8 bricks, 1 was 100%, now almost 2 at 100%
09:56 Peppard before remove-brick most of them were at 90%
09:56 Peppard but remove-brick somehow filled them
09:56 Peppard so i kinda have to manually rebalance
09:56 Peppard right?
09:56 nbalacha Peppard, do you have sufficient space on the bricks not being removed to accommodate the data on the bricks being removed?
09:57 Peppard yes, easily
09:57 nbalacha Peppard, because rebalance has to copy them somewhere
09:57 Peppard there is an almost empty brick
09:57 nbalacha Peppard, that is unfortunately not a guarantee - dht will move the files according to the hash, not the brick with the most free space
09:58 nbalacha Peppard, so you could still end up with 100% bricks
09:58 nbalacha Peppard, in your case, I think your best bet is to do a remove-brick force
09:58 Peppard i thought so, that's probably what's happening right now
09:58 nbalacha Peppard, what kind of volume is this? dist-rep?
09:59 susant1 joined #gluster
09:59 Peppard ok.... besides that, are there any caveats when doing manual rebalance? (moving files between bricks manually) do i have to take care of anything in the .glusterfs dir?
09:59 Peppard dist-only
10:02 nbalacha Peppard, here is what I think will work. As this is a pure dist volume, there is no need to check for heals etc.
10:02 nbalacha Peppard, how many bricks are being removed
10:02 Peppard 1 of the 8
10:02 nbalacha susant1, can you also validate the steps I will list out
10:02 nbalacha Peppard, stop the ongoing remove-brick.
10:03 susant nbalacha: go ahead
10:03 nbalacha Peppard, do a remove-brick force. All the files on the removed brick will no longer be accessible
10:04 nbalacha Peppard, you will need to run a rebalance fix-layout on the volume. Once that is complete, you will need to copy the files from the removed brick to the volume via a mount point
10:04 nbalacha Peppard, you will need to be careful while copying to make sure you do not copy any internal files such as the linkto files, as that would overwrite the actual data file
10:04 nbalacha you do not need to touch the .glusterfs dir
10:05 nbalacha but when copying files, do not copy 0 byte files whose permissions are ------T
10:05 glusterbot nbalacha: ----'s karma is now -6
10:05 Peppard (cleaning up the 0 byte link files, right?)
10:05 nbalacha those are internal gluster files
10:05 nbalacha so you will copy from the brick to the mount point any non-linkto files
10:05 Peppard i can clean them up no problem
10:05 nbalacha no need to clean them up
10:06 nbalacha just make sure you do not copy them to the mount point
10:06 Peppard ok
10:06 nbalacha and you must copy the files from the brick to the mount point
10:06 Peppard yes
10:06 nbalacha don't try to copy them directly to another brick in the backend
10:07 susant those are linkto files with permission (1000 or --------T )
10:07 glusterbot susant: ------'s karma is now -2
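
Putting the copy-back step together, a rough sketch (all names are hypothetical; the find test assumes GNU find and skips .glusterfs plus the 0-byte linkto files with mode 1000, i.e. sticky bit only):

    # run after the remove-brick force and fix-layout shown earlier
    cd /export/removed-brick
    # copy every real file to a client mount, recreating the paths,
    # while pruning .glusterfs/ and skipping the empty mode-1000 linkto files
    find . -path ./.glusterfs -prune -o \
        -type f ! \( -perm 1000 -size 0 \) -print0 |
        cpio -0 -pdm /mnt/myvol/
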
10:07 nbalacha Peppard, you would also need to free up some space on the 100% full bricks
10:07 nbalacha susant, what do you think? Anything else to keep in mind?
10:08 nbalacha Peppard, do you have hardlinks in your data set
10:08 susant nbalacha: seems fine
10:08 Peppard i could make a list of files on those bricks, move them off the volume and move them back again to get them "balanced"?
10:08 nbalacha susant, thanks
10:08 Peppard no hardlinks
10:08 itisravi_ joined #gluster
10:09 nbalacha Peppard, do you mean for the 100% full bricks? Yes, you could do that.
10:09 cloph nbalacha: I might be in similar situation  - so what if I had (lots of) hardlinks?
10:09 nbalacha Peppard, how much data do you have on the removed brick
10:09 Peppard ok, seems like a good plan... thank you very much! I'd like to increase your karma ;)
10:10 nbalacha Peppard, you are welcome.
10:10 nbalacha Peppard, Let us know how it goes
10:10 Peppard the removed one has already lost quite a bit, but it's still at about 60%
10:10 nbalacha Peppard, as in GB, TB?
10:11 nbalacha cloph, hardlinks are a little trickier - you would need to copy the file and then recreate the other hardlinks
10:11 nbalacha susant, ^^ does this sound right?
10:12 psony joined #gluster
10:12 susant yes, you need to list the hardlinks based on their inode number. Copy one file and create the other hardlinks on the mount point.
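
A sketch of what susant describes for hardlinked files (paths are hypothetical): copy one name per inode through the mount, then recreate the remaining names with ln on the mount point.

    # group hardlinked names on the old brick by inode number (GNU find)
    find /export/removed-brick -type f -links +1 -printf '%i %p\n' | sort -n
    # copy one name per inode through the client mount ...
    cp -p /export/removed-brick/dir/a.bin /mnt/myvol/dir/a.bin
    # ... and recreate the other names as hardlinks on the mount
    ln /mnt/myvol/dir/a.bin /mnt/myvol/other/a.bin
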
10:13 cloph If I were OK with breaking the hardlinks, i.e. having separate inodes, would I still have to do cleanup?
10:13 itisravi joined #gluster
10:13 Peppard the removed one is a very old 1.5TB disk, dropped from 90% to 60%, so about 500GB were moved from that disk... but overall the remove-brick rebalance moved >5TB of data among the various bricks
10:15 Peppard (in case you wonder, i combine gluster distributed with snapraid for data integrity and safety)
10:15 nick_g Is it possible to configure a glusterfs client to unmount, or mount read-only, a distributed volume when one of its bricks is down?
10:16 Alghost joined #gluster
10:16 cloph nick_g: having it read-only IIRC is default if quorum is not met
10:17 cloph (although not sure whether it is a "proper" read-only or just a "failure when attempting to write")
10:23 nick_g if I understand correctly how glusterfs works, "Server quorum is a feature intended to reduce the occurrence of "split brain"" and "Split brain is a situation where two or more replicated copies of a file become divergent". We have a volume of type "distributed" so no split-brain is possible. I just want to instruct the glusterfs-client to unmount if any brick of the volume goes down. Is that possible?
10:23 nick_g cloph: sorry, should have noted you in my previous message ;)
10:24 nbalacha nick_g, no, that is not possible for pure dist volumes
10:24 nbalacha nick_g, you will not be able to access files on that brick and certain dir ops may fail
10:26 ksandha_ joined #gluster
10:26 cloph nick_g: note that there are two levels of quorum - the servers/peers in the cluster and the bricks making up a volume
10:26 cloph and yes, for distributed volumes → if the brick is down, so are the files.
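
The two quorum levels cloph mentions are controlled by separate volume options; a sketch with a hypothetical volume (neither helps a pure distribute volume, since there are no replicas to arbitrate):

    # server-side quorum: glusterd stops the volume's bricks when
    # too few peers in the trusted pool are reachable
    gluster volume set myvol cluster.server-quorum-type server
    # client-side quorum: writes to a replica set start failing when
    # too few of its bricks are up
    gluster volume set myvol cluster.quorum-type auto
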
10:31 nbalacha Peppard, please note it will be safest to do this when there are no clients accessing the volume
10:31 apandey_ joined #gluster
10:31 nbalacha or at least not accessing the data on the removed brick
10:33 psony joined #gluster
10:33 Peppard i'll close access to the volume until the operation is finished
10:36 WebertRLZ joined #gluster
10:43 WebertRLZ joined #gluster
10:43 kotreshhr joined #gluster
10:45 kotreshhr left #gluster
10:45 karthik_us joined #gluster
10:50 victori joined #gluster
10:52 susant joined #gluster
10:56 shyam joined #gluster
10:58 Gambit15 joined #gluster
10:59 susant joined #gluster
11:01 georgeangel[m] joined #gluster
11:06 Karan joined #gluster
11:11 msvbhat joined #gluster
11:11 kramdoss_ joined #gluster
11:13 rafi2 joined #gluster
11:23 baber joined #gluster
11:31 skumar__ joined #gluster
11:33 WebertRLZ joined #gluster
11:41 apandey__ joined #gluster
11:44 jiffin1 joined #gluster
11:46 victori joined #gluster
11:57 ksandha_ joined #gluster
11:57 kotreshhr joined #gluster
11:58 kramdoss_ joined #gluster
12:11 Gambit15 joined #gluster
12:13 Gambit15_ joined #gluster
12:19 kramdoss_ joined #gluster
12:20 susant joined #gluster
12:25 nick_g cloph: thank you!
12:26 nick_g nbalacha: thank you too!
12:26 nick_g wish you all the best, guys. May a light shine above this community!
12:31 Karan joined #gluster
12:39 susant joined #gluster
12:40 prasanth joined #gluster
12:51 paulds left #gluster
12:54 buvanesh_kumar joined #gluster
13:00 kramdoss_ joined #gluster
13:01 hemangpatel joined #gluster
13:08 vbellur joined #gluster
13:17 WebertRLZ joined #gluster
13:18 hemangpatel left #gluster
13:20 kramdoss_ joined #gluster
13:29 skylar joined #gluster
13:43 shyam joined #gluster
13:45 marlinc joined #gluster
13:48 plarsen joined #gluster
13:54 kramdoss_ joined #gluster
14:02 saintpablos joined #gluster
14:04 amarts joined #gluster
14:04 saintpabloss joined #gluster
14:06 saintpablo joined #gluster
14:13 kramdoss_ joined #gluster
14:30 marlinc joined #gluster
14:37 skylar joined #gluster
14:43 ankitr joined #gluster
14:49 nbalacha joined #gluster
14:50 jstrunk joined #gluster
14:52 susant left #gluster
14:53 aronnax joined #gluster
14:54 msvbhat joined #gluster
14:55 kshlm Gluster Community Meeting starts in 5 minutes in #gluster-meeting
14:59 jiffin joined #gluster
15:00 kotreshhr left #gluster
15:05 kshlm Community Meeting has started in #gluster-meeting
15:09 gyadav joined #gluster
15:11 kpease joined #gluster
15:12 ninjaryan joined #gluster
15:20 baber joined #gluster
15:22 buvanesh_kumar joined #gluster
15:23 rastar joined #gluster
15:53 ninjaryan joined #gluster
16:07 skylar joined #gluster
16:26 susant joined #gluster
16:27 shaunm joined #gluster
16:29 Gambit15 joined #gluster
16:29 Gambit15_ joined #gluster
16:42 jiffin joined #gluster
16:51 Shu6h3ndu joined #gluster
16:52 ninjaryan joined #gluster
16:53 armyriad joined #gluster
16:56 ankitr joined #gluster
16:59 ninjaryan joined #gluster
17:01 pioto joined #gluster
17:02 baber joined #gluster
17:07 ninjaryan joined #gluster
17:23 ivan_rossi left #gluster
17:33 skylar joined #gluster
17:49 sona joined #gluster
18:07 riyas joined #gluster
18:10 MrAbaddon joined #gluster
18:17 cliluw joined #gluster
18:18 cliluw joined #gluster
18:24 skylar joined #gluster
18:30 vbellur joined #gluster
18:37 elico joined #gluster
19:11 rafi joined #gluster
19:16 vbellur joined #gluster
19:19 rafi1 joined #gluster
19:35 ogelpre joined #gluster
19:38 ogelpre does gluster fit as a network file system for /home of ~10 desktops and ~150 servers?
19:41 victori joined #gluster
19:47 ogelpre is the latency good enough for desktop software like browsers?
20:08 vbellur joined #gluster
20:08 shyam joined #gluster
20:08 vbellur joined #gluster
20:09 kharloss joined #gluster
20:10 vbellur joined #gluster
20:10 vbellur joined #gluster
20:16 shyam joined #gluster
20:34 jkroon joined #gluster
20:50 victori joined #gluster
20:56 vbellur joined #gluster
20:57 vbellur joined #gluster
20:58 vbellur joined #gluster
20:59 vbellur joined #gluster
21:23 shyam joined #gluster
21:43 shyam1 joined #gluster
21:57 victori joined #gluster
22:12 shyam joined #gluster
22:20 arpu joined #gluster
22:21 shyam joined #gluster
22:29 marlinc joined #gluster
22:44 Alghost joined #gluster
22:45 Alghost joined #gluster
23:20 victori joined #gluster
23:40 Alghost_ joined #gluster
23:41 Alghost joined #gluster
