
IRC log for #gluster, 2016-03-24


All times shown according to UTC.

Time Nick Message
00:00 mowntan joined #gluster
00:00 mowntan joined #gluster
00:00 SpeeR dmesg | grep -i fuse
00:00 SpeeR fuse init (API version 7.14)
00:01 haomaiwa_ joined #gluster
00:02 gbox SpeeR: glusterfs-fuse is also installed?
00:03 alghost joined #gluster
00:03 SpeeR I just yum removed the gluster*, and did a reinstall...
00:04 SpeeR this time it installed 2 dependencies
00:04 SpeeR and I think solved my problem
00:04 SpeeR all the binaries are installed now
00:04 SpeeR thanks gbox
00:05 gbox SpeeR: Sure.  The software equivalent of unplugging it and plugging it back in.
00:05 SpeeR haha yeah, well I was installing from salt... so I guess I need to add those prereqs into the state
00:05 SpeeR I thought it would have either failed to install or auto-installed the prereqs, but I guess I was wrong
00:07 gbox SpeeR: via David Wheeler: "Any problem in computer science can be solved with another layer of indirection. But that usually will create another problem"
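[sketch] The reinstall path described above, roughly, assuming a CentOS/RHEL-style box with the gluster repo packages (package names here are the typical ones and may differ per distro):
    # confirm the userspace fuse client and the kernel module are both present
    rpm -q glusterfs glusterfs-fuse glusterfs-server
    modprobe fuse && dmesg | grep -i fuse
    # a clean reinstall pulls any missing dependencies back in
    yum remove 'glusterfs*'
    yum install glusterfs glusterfs-server glusterfs-fuse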
00:07 misc joined #gluster
00:12 DV joined #gluster
00:19 misc joined #gluster
00:21 luizcpg joined #gluster
00:35 harish_ joined #gluster
00:41 F2Knight joined #gluster
00:46 misc joined #gluster
00:58 Neilo joined #gluster
01:00 RameshN joined #gluster
01:01 haomaiwa_ joined #gluster
01:12 misc joined #gluster
01:30 misc joined #gluster
01:33 d4n13L joined #gluster
01:46 baojg joined #gluster
01:47 nathwill joined #gluster
01:55 atalur joined #gluster
01:55 Neilo Just wondering: with gluster 3.6, is there any way to set up a few servers that allow 'faster' uploads? We're looking into using a couple of servers running SSDs to upload data more quickly. Would we need a separate volume just on those faster nodes?
01:56 lanning yes
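[sketch] What a separate "fast" volume on SSD-backed nodes could look like; the hostnames (ssd1, ssd2), brick paths and volume name are placeholders, not from the discussion:
    gluster peer probe ssd1
    gluster peer probe ssd2
    # a dedicated replica 2 volume that lives only on the SSD bricks
    gluster volume create fastvol replica 2 ssd1:/bricks/ssd/brick1 ssd2:/bricks/ssd/brick1
    gluster volume start fastvol
    mount -t glusterfs ssd1:/fastvol /mnt/fastvol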
01:56 calavera joined #gluster
02:04 EinstCrazy joined #gluster
02:10 luizcpg joined #gluster
02:21 plarsen joined #gluster
02:28 kaushal_ joined #gluster
02:36 amye joined #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:51 ShwethaHP_ joined #gluster
02:51 ShwethaHP_ left #gluster
02:51 ShwethaHP joined #gluster
02:54 dtrainor joined #gluster
02:55 dtrainor hi.  i'm trying to run 'make' on a volume that's mounted as glusterfs, and I keep running into this:  http://stackoverflow.com/questions/2438890/cc1plus-error-include-value-too-large-for-defined-data-type-when-compiling-wi/2496749
02:55 glusterbot Title: visual studio - cc1plus: error: include: Value too large for defined data type when compiling with g++ - Stack Overflow (at stackoverflow.com)
02:55 dtrainor It looks like you can solve this using cifs by setting noserverino as a mount option.  Is there a Gluster equivalent that I haven't found yet?
02:58 dtrainor I just mounted the volume via nfs as a test but still ran into this.
03:02 dtrainor hmm.. setting the nfs.enable-ino32 attribute on the volume seemed to help.  i'll test this using the gluster client shortly
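[sketch] The two 32-bit-inode workarounds touched on here; VOLNAME, server and mount point are placeholders:
    # gluster's built-in NFS server: hand out 32-bit inode numbers
    gluster volume set VOLNAME nfs.enable-ino32 on
    # native FUSE client: there is a comparable enable-ino32 mount option
    mount -t glusterfs -o enable-ino32 server1:/VOLNAME /mnt/VOLNAME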
03:06 Lee1092 joined #gluster
03:14 calavera joined #gluster
03:16 nehar joined #gluster
03:21 ovaistariq joined #gluster
03:21 atalur joined #gluster
03:29 haomaiwa_ joined #gluster
03:44 kanagaraj joined #gluster
03:47 DV joined #gluster
03:48 shubhendu joined #gluster
03:49 kdhananjay joined #gluster
03:58 nbalacha joined #gluster
03:59 atinm joined #gluster
04:01 haomaiwa_ joined #gluster
04:02 itisravi joined #gluster
04:06 RameshN joined #gluster
04:06 vmallika joined #gluster
04:08 gem joined #gluster
04:21 ovaistariq joined #gluster
04:25 vmallika joined #gluster
04:31 camg joined #gluster
04:32 aravindavk joined #gluster
04:34 gem_ joined #gluster
04:41 ramky joined #gluster
04:43 hgowtham joined #gluster
04:58 gowtham joined #gluster
05:01 haomaiwa_ joined #gluster
05:04 EinstCrazy joined #gluster
05:05 prasanth joined #gluster
05:05 nishanth joined #gluster
05:06 camg joined #gluster
05:09 aravindavk joined #gluster
05:09 Apeksha joined #gluster
05:11 hchiramm joined #gluster
05:12 poornimag joined #gluster
05:15 ovaistariq joined #gluster
05:17 DV joined #gluster
05:22 pgreg joined #gluster
05:23 pgreg joined #gluster
05:32 Bhaskarakiran joined #gluster
05:41 karthik___ joined #gluster
05:46 kdhananjay joined #gluster
05:48 rafi joined #gluster
05:50 ovaistariq joined #gluster
05:51 ovaistar_ joined #gluster
05:52 Gnomethrower joined #gluster
05:53 Saravanakmr joined #gluster
05:55 Manikandan joined #gluster
06:01 haomaiwa_ joined #gluster
06:02 rastar joined #gluster
06:05 ramky joined #gluster
06:05 Manikandan joined #gluster
06:06 skoduri joined #gluster
06:13 Manikandan joined #gluster
06:14 ShwethaHP joined #gluster
06:15 ShwethaHP left #gluster
06:15 sabansal_ joined #gluster
06:19 kshlm joined #gluster
06:22 Gnomethrower joined #gluster
06:27 baojg joined #gluster
06:32 skoduri joined #gluster
06:32 karnan joined #gluster
06:37 pgreg joined #gluster
06:38 ovaistariq joined #gluster
06:40 liibert joined #gluster
06:41 gem joined #gluster
06:47 baojg joined #gluster
06:49 prasanth joined #gluster
06:53 vmallika joined #gluster
06:55 karnan joined #gluster
06:57 Gnomethrower joined #gluster
07:01 haomaiwang joined #gluster
07:08 baojg joined #gluster
07:09 mhulsman joined #gluster
07:15 ramky joined #gluster
07:16 pgreg joined #gluster
07:22 jtux joined #gluster
07:30 post-factum itisravi: are you here?
07:31 harish_ joined #gluster
07:54 Gnomethrower joined #gluster
08:01 haomaiwa_ joined #gluster
08:05 drankis joined #gluster
08:11 ovaistariq joined #gluster
08:15 itisravi post-factum: yup tell me
08:17 post-factum itisravi: regarding this one: https://bugzilla.redhat.com/show_bug.cgi?id=1309462
08:17 glusterbot Bug 1309462: low, unspecified, ---, ravishankar, MODIFIED , Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance.  Fresh install of 3.7.8 also has low write performance
08:17 post-factum test setup: 3.7.9, 4 bricks, distributed-replicated 2×2
08:17 post-factum local mount: over 250 MB/s
08:17 jri joined #gluster
08:18 post-factum 10GbE mount: ~120 MB/s
08:18 post-factum 1G mount: 40 MB/s
08:18 post-factum any thoughts?
08:19 post-factum all 4 bricks are within 1 node, "local mount" refers to fuse mount at that node
08:19 post-factum with 3.7.6 it was up to 60 MB/s on 1GbE connection
08:19 Slashman joined #gluster
08:20 itisravi post-factum: We fixed all known issues in fuse and afr that we could find, but robert seems to be observing a perf drop for arbiter volumes. I have yet to look into that.
08:20 itisravi post-factum: oh
08:21 post-factum itisravi: how could I help you to debug/provide more info on this issue?
08:22 itisravi post-factum: volume profile info would be the first place. you can see what FOPs are causing max latency in the profile info output.
08:22 itisravi poornimag: ^
08:23 itisravi poornimag:  post-factum is saying he's getting 40MB/s in 3.7.9 vs 60 in 3.7.6
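[sketch] Gathering the profile data itisravi asks for; VOLNAME is a placeholder:
    gluster volume profile VOLNAME start
    # ...run the workload against a client mount of VOLNAME...
    gluster volume profile VOLNAME info    # per-brick FOP latencies and call counts
    gluster volume profile VOLNAME stop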
08:24 ivan_rossi left #gluster
08:27 fsimonce joined #gluster
08:32 rafi anoopcs: ping
08:32 glusterbot rafi: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
08:33 ctria joined #gluster
08:35 post-factum itisravi: ok, while I have a free node for experiments, I will do that
08:36 arcolife joined #gluster
08:39 itisravi post-factum: okay
08:41 Wizek_ joined #gluster
08:42 ovaistariq joined #gluster
08:42 post-factum itisravi: should i provide "profile info" output?
08:42 Apeksha joined #gluster
08:43 Apeksha joined #gluster
08:43 kshlm joined #gluster
08:44 itisravi post-factum: https://bugzilla.redhat.com/show_bug.cgi?id=1309462#c17
08:44 glusterbot Bug 1309462: low, unspecified, ---, ravishankar, MODIFIED , Upgrade from 3.7.6 to 3.7.8 causes massive drop in write performance.  Fresh install of 3.7.8 also has low write performance
08:46 post-factum ok, here they are:
08:46 post-factum server side: http://termbin.com/ad5u
08:47 post-factum client side: http://termbin.com/w4hg
08:48 post-factum i've created several zeroed files with dd and different block sizes, and then removed all the files
08:48 post-factum volume options are default, connection is 1GbE
08:49 post-factum FLUSH takes a long time, eh?
08:52 itisravi UNLINK seems to have the highest average latency. wonder why that is.
08:52 aravindavk joined #gluster
08:54 post-factum itisravi: but unlink does not influence write performance
08:55 post-factum what i'm talking about is pure write performance with dd or file copying with midnight commander
08:57 poornimag post-factum, can you check by disabling write-behind? #gluster volume set <volname> write-behind off
08:58 post-factum poornimag: one moment plz
08:59 post-factum poornimag: no difference
08:59 post-factum did set, start/stop volume and remount
09:00 post-factum i guess that is because write-behind is off by default
09:00 post-factum :)
09:01 post-factum or no?
09:01 ashiq joined #gluster
09:01 post-factum ah, "set help" says it is on by default, sorry
09:01 post-factum so yep, no difference
09:01 haomaiwa_ joined #gluster
09:01 poornimag post-factum, write-behind is on by default, so there should have been some difference; anyway, is it possible to provide the profile info for 3.7.6, so we can compare?
09:02 hchiramm joined #gluster
09:02 sakshi joined #gluster
09:02 post-factum poornimag: well, when it is off, the speed is ~30 MB/s, when it is on, the speed is ~40 MB/s
09:03 poornimag post-factum, the profile should be for the exact same load, we see that the unlink takes longer in the profile which you have shared
09:03 poornimag post-factum, ahh ok, so write-behind not the culprit
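[sketch] The write-behind toggle tried above, written out with the full option name; VOLNAME is a placeholder:
    gluster volume set VOLNAME performance.write-behind off
    # remount the client (as was done above) so the new client graph is definitely in effect, then rerun the test
    gluster volume set VOLNAME performance.write-behind on    # restore the default afterwards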
09:03 post-factum poornimag: ok, will downgrade now and check again
09:03 poornimag post-factum, is the workload, dd followed by rm, for 8 files?
09:04 poornimag post-factum, cool, thanks
09:04 post-factum poornimag: correct, dd+rm
09:04 post-factum poornimag: with block size from 8M to 64k
09:04 post-factum (no difference with different block sizes, though)
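[sketch] The dd-plus-rm workload described above; the mount point is a placeholder, and conv=fdatasync (an addition here) makes dd include the final flush in the reported rate:
    dd if=/dev/zero of=/mnt/VOLNAME/zero-1M  bs=1M  count=1024  conv=fdatasync
    dd if=/dev/zero of=/mnt/VOLNAME/zero-64k bs=64k count=16384 conv=fdatasync
    rm -f /mnt/VOLNAME/zero-1M /mnt/VOLNAME/zero-64k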
09:07 post-factum poornimag: compiling 3.7.6 now
09:08 poornimag post-factum, ah ok
09:09 pgreg joined #gluster
09:09 kovshenin joined #gluster
09:11 post-factum i wish gluster could use cmake instead of autotools :)
09:13 ndevos post-factum: if you have convincing arguments, we could consider it for gluster-4.0
09:13 post-factum hm, got the same 30-40 MB/s
09:13 post-factum wtf
09:14 post-factum and with bs=1M — 52 MB/s
09:15 post-factum and with bs=64k — 20 MB/s
09:15 post-factum so unreliable results
09:18 itisravi post-factum: no bug then :)
09:19 post-factum itisravi: but why is there such a huge difference between 1G and 10G?
09:20 itisravi post-factum: not sure but won't a higher network link mean more throughput?
09:20 social joined #gluster
09:21 itisravi but I guess that comes into effect only if the 1G network is saturated
09:21 post-factum 40 MB/s is not the 1G max even with replica 2 (80 MB/s in total)
09:22 post-factum 3.7.6 server profiling: http://termbin.com/9c6i
09:22 post-factum and client: http://termbin.com/y533
09:23 post-factum hm, forgot about unlink
09:23 post-factum here is client unlink: http://termbin.com/o5sw
09:23 post-factum still huge latency for it, but doesn't matter for me
09:24 post-factum and server one: http://termbin.com/ua4l
09:26 post-factum ok then, nvm :)
09:29 maxadamo joined #gluster
09:29 maxadamo hey!
09:29 maxadamo I noticed today that this path is empty: http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/Debian/
09:29 glusterbot Title: Index of /pub/gluster/glusterfs/3.5/LATEST/Debian (at download.gluster.org)
09:30 maxadamo while versions 3.4 and lower, and 3.6 and higher have files inside
09:30 post-factum ndevos: ^^
09:31 ndevos post-factum: hmm, maybe Debian has packages for 3.5 in its own repos?
09:31 Apeksha joined #gluster
09:32 maxadamo well, it was there before. I have a nagios check for our debmirror scripts that started alerting yesterday.
09:33 ndevos maxadamo: hmm, I don't know... kkeithley and misc have access to download.g.o and may be able to check what happened
09:33 post-factum kkeithley: ^^
09:33 post-factum misc: ^^
09:33 post-factum :)
09:34 ndevos :)
09:37 Apeksha joined #gluster
09:37 maxadamo let's wait for kkeithley and misc :)
09:38 ndevos maxadamo: you can also send an email to gluster-infra@gluster.org, that is where the admins are reachable
09:38 maxadamo thanks ndevos I'll do it.
09:40 nehar joined #gluster
09:42 skoduri joined #gluster
09:46 misc post-factum: mhh ?
09:47 post-factum maxadamo: misc is here :)
09:47 misc I do not know what happened to that, I do have access as an admin until we figure out a saner workflow
09:47 maxadamo hey!
09:47 misc but I do not handle much of what is going on there
09:48 maxadamo I am only trying to understand if it was accidentally or intentionally removed.
09:48 maxadamo if it was intentionally removed, I'll intentionally remove my debmirror script and I am done
09:49 misc well, I do not remember it being intentional, but people also would have no reason to tell me :/
09:51 Ulrar Has anyone experienced data corruption with 3.7.8 ? I have multiple proxmox clusters using glusterfs, the ones with 3.7.6 work fine but I have one with 3.7.8 having huge disk corruption problems, and I'm wondering if it's hardware or software
09:51 Ulrar Since I already found that NFS bug on 3.7.8, I do wonder if something else might be broken in that version
09:52 Vigdis joined #gluster
10:00 jiffin joined #gluster
10:01 yosafbridge joined #gluster
10:01 haomaiwa_ joined #gluster
10:09 MrAbaddon joined #gluster
10:14 B21956 joined #gluster
10:15 Apeksha joined #gluster
10:16 Debloper joined #gluster
10:16 Apeksha joined #gluster
10:29 [Enrico] joined #gluster
10:30 anti[Enrico] joined #gluster
10:32 yosafbridge joined #gluster
10:32 mbukatov joined #gluster
10:34 ovaistariq joined #gluster
10:35 luizcpg joined #gluster
10:36 Gnomethrower joined #gluster
10:42 hgowtham joined #gluster
10:43 ira joined #gluster
10:44 nbalacha joined #gluster
10:49 MrAbaddon joined #gluster
11:04 haomaiwa_ joined #gluster
11:10 nbalacha joined #gluster
11:18 baoboa joined #gluster
11:26 liibert hi all, does somebody have experience running OpenVZ simfs VMs directly on glusterfs, and/or using a distributed-disperse (13 x (4 + 2) = 78) volume over 20+ hosts with 3 bricks on every host?
11:27 vmallika joined #gluster
11:28 post-factum liibert: no, your setup is unique to you. you should ask a more specific question
11:29 Philambdo joined #gluster
11:31 dlambrig_ joined #gluster
11:35 ovaistariq joined #gluster
11:49 unclemarc joined #gluster
11:51 Bhaskarakiran joined #gluster
11:56 morse joined #gluster
12:01 haomaiwa_ joined #gluster
12:03 Philambdo joined #gluster
12:05 jiffin1 joined #gluster
12:08 EinstCrazy joined #gluster
12:14 nbalacha joined #gluster
12:14 Bhaskarakiran joined #gluster
12:17 DV__ joined #gluster
12:21 ramky joined #gluster
12:27 jiffin1 joined #gluster
12:31 TvL2386 joined #gluster
12:32 eljrax left #gluster
12:34 robb_nl joined #gluster
12:40 EinstCrazy joined #gluster
12:41 jiffin joined #gluster
12:41 itisravi joined #gluster
12:44 jiffin1 joined #gluster
12:48 jiffin joined #gluster
12:51 jiffin1 joined #gluster
12:53 jiffin joined #gluster
12:57 jiffin1 joined #gluster
13:00 johnmilton joined #gluster
13:01 jiffin joined #gluster
13:01 haomaiwa_ joined #gluster
13:01 mbukatov joined #gluster
13:03 jiffin1 joined #gluster
13:07 haomaiwang joined #gluster
13:09 kshlm joined #gluster
13:09 sakshi joined #gluster
13:15 jiffin1 joined #gluster
13:18 arcolife joined #gluster
13:21 hackman joined #gluster
13:23 ovaistariq joined #gluster
13:24 shubhendu joined #gluster
13:24 jiffin joined #gluster
13:30 kanagaraj joined #gluster
13:32 EinstCrazy joined #gluster
13:33 mpietersen joined #gluster
13:34 DV joined #gluster
13:35 mpietersen joined #gluster
13:37 mpietersen joined #gluster
13:37 jiffin joined #gluster
13:38 gowtham joined #gluster
13:38 mpietersen joined #gluster
13:39 mpietersen joined #gluster
13:42 DV joined #gluster
13:44 kdhananjay joined #gluster
13:45 rastar joined #gluster
13:49 skylar joined #gluster
13:49 rwheeler joined #gluster
13:51 wushudoin joined #gluster
13:54 wushudoin joined #gluster
13:55 scobanx joined #gluster
13:55 mowntan joined #gluster
13:56 scobanx Hi, may I ask what the purpose of the .glusterfs/unlink directory in bricks is?
13:57 kdhananjay scobanx: files with open fds whose paths are unlinked by the application will be moved to that directory until the fd is closed.
13:58 scobanx kdhananjay: Thanks for the answer
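[sketch] One way to observe the behaviour kdhananjay describes; the mount point and brick path are placeholders:
    # on a client mount: keep an fd open, then unlink the path
    exec 3<>/mnt/VOLNAME/held-open
    rm -f /mnt/VOLNAME/held-open
    # on a brick: the inode sits under .glusterfs/unlink while the fd is still open
    ls -l /bricks/brick1/.glusterfs/unlink/
    # closing the fd on the client lets the brick reap the entry
    exec 3>&-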
14:05 EinstCrazy joined #gluster
14:14 scobanx kdhananjay: I am doing some write/delete tests and there are thousands of deleted lines in 'lsof | grep deleted' which are held by the glusterfs process. When will those FDs be closed?
14:14 DV joined #gluster
14:18 EinstCrazy joined #gluster
14:21 scobanx I am doing some write/delete tests and there are thousands of deleted lines in 'lsof | grep deleted' which are held by the glusterfs process. When will those FDs be closed?
14:22 luizcpg_ joined #gluster
14:22 amye joined #gluster
14:23 coredump joined #gluster
14:27 luizcpg joined #gluster
14:30 shaunm joined #gluster
14:31 luizcpg joined #gluster
14:35 liibert joined #gluster
14:44 dtrainor joined #gluster
14:55 vmallika joined #gluster
14:55 harish_ joined #gluster
14:56 haomaiwa_ joined #gluster
14:58 Saravanakmr joined #gluster
15:00 klaas joined #gluster
15:00 scobanx Anyone online?
15:01 haomaiwa_ joined #gluster
15:02 lkoranda joined #gluster
15:02 rastar joined #gluster
15:04 * post-factum is online, but hasn't observed such behavior
15:04 post-factum what glusterfs version do you use?
15:05 scobanx 3.7.9
15:05 vmallika joined #gluster
15:07 scobanx post-factum: I am using 3.7.9, is this a normal behavior?
15:08 post-factum i doubt it. checked my 3.7.9 and saw no "deleted" lines
15:08 post-factum try to check your logs first
15:11 ovaistariq joined #gluster
15:11 mzinkf joined #gluster
15:14 scobanx No errors in brick logs
15:15 scobanx There is an error about not finding bitrot.so but it is not related i think
15:15 Iodun joined #gluster
15:21 skoduri joined #gluster
15:22 post-factum but talking about the glusterfs process, that is the client side
15:22 post-factum what about it?
15:23 pgreg_ joined #gluster
15:25 wushudoin joined #gluster
15:26 scobanx sorry, this is the server side, the glusterfsd process is holding the FDs
15:26 luizcpg_ joined #gluster
15:27 Trefex joined #gluster
15:28 ndevos scobanx: how long do these files stay in their '(deleted)' state?
15:29 ndevos scobanx: there is a "janitor" in the posix xlator, and I thought that one would close open file-descriptors over time
15:29 hagarth joined #gluster
15:29 shaunm joined #gluster
15:29 hagarth joined #gluster
15:30 scobanx ndevos: I started the test 6 hours ago and just saw the deleted lines in lsof
15:30 scobanx So they will go away automatically after some time?
15:31 ndevos scobanx: six hours sounds long, but it would be possible that a client still uses those files, I guess
15:31 semiautomatic joined #gluster
15:31 ndevos scobanx: on a local filesystem you can also open a file, read/write to it, delete it, and only close it after a while
15:31 scobanx Client is fio, it generated the files and then I deleted those files.
15:32 scobanx fio is in server mode working as a daemon; maybe it still holds the FDs on the client
15:32 scobanx I will try to kill the fio processes on the clients..
15:32 ndevos I do not think it is an issue, they might get deleted eventually, maybe based on time, or on the number of open file-descriptors
15:33 scobanx So when they get deleted, the files in .glusterfs/unlink will get deleted too?
15:34 ndevos yes, I would expect so
15:34 scobanx Ok I will monitor and ask here again if they are still here..
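[sketch] Narrowing down which side is holding the deleted fds; process names are the standard gluster ones:
    # brick side: deleted-but-open files held by the brick daemons (glusterfsd)
    lsof -p "$(pgrep -d, -x glusterfsd)" | grep -i deleted
    # client side: the fuse mount process is called glusterfs; same check applies
    lsof -p "$(pgrep -d, -x glusterfs)" | grep -i deleted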
15:36 [Enrico] joined #gluster
15:38 armyriad joined #gluster
15:45 camg joined #gluster
15:54 scobanx Hi, I have a 60 node cluster and some gluster volume set commands return Error: request timeout but set the value anyway. Is this a problem?
15:54 tswartz joined #gluster
15:57 jiffin joined #gluster
15:58 calavera joined #gluster
16:01 haomaiwa_ joined #gluster
16:05 ovaistariq joined #gluster
16:06 post-factum joined #gluster
16:18 d0nn1e joined #gluster
16:20 hagarth joined #gluster
16:20 F2Knight joined #gluster
16:22 haomaiwa_ joined #gluster
16:23 haomaiwang joined #gluster
16:24 haomaiwang joined #gluster
16:25 haomaiwa_ joined #gluster
16:26 haomaiwang joined #gluster
16:26 ovaistariq joined #gluster
16:27 haomaiwa_ joined #gluster
16:28 haomaiwang joined #gluster
16:29 haomaiwang joined #gluster
16:30 7GHAALQ0C joined #gluster
16:31 drankis joined #gluster
16:31 haomaiwang joined #gluster
16:31 Wizek_ joined #gluster
16:32 haomaiwa_ joined #gluster
16:33 haomaiwang joined #gluster
16:34 haomaiwa_ joined #gluster
16:35 haomaiwa_ joined #gluster
16:36 haomaiwa_ joined #gluster
16:37 18WAAIF5W joined #gluster
16:38 haomaiwang joined #gluster
16:39 21WAAITW8 joined #gluster
16:40 haomaiwa_ joined #gluster
16:40 dlambrig_ joined #gluster
16:41 haomaiwang joined #gluster
16:41 Gnomethrower joined #gluster
16:42 haomaiwang joined #gluster
16:43 haomaiwang joined #gluster
16:44 jiffin1 joined #gluster
16:48 jiffin1 joined #gluster
17:00 ninkotech joined #gluster
17:00 ninkotech_ joined #gluster
17:04 semiautomatic1 joined #gluster
17:06 semiautomatic1 joined #gluster
17:09 robb_nl joined #gluster
17:24 ninkotech joined #gluster
17:25 nbalacha joined #gluster
17:28 jri_ joined #gluster
17:31 atalur joined #gluster
17:39 jlp1 joined #gluster
17:40 jlp1 i have a cluster with 4 peers.  2 of the peers show a status of "Peer Rejected" for each other.  any ideas?
17:40 bennyturns joined #gluster
17:49 semiautomatic joined #gluster
17:50 dblack joined #gluster
17:52 robb_nl joined #gluster
17:52 jiffin joined #gluster
17:53 post-factum jlp1: you have volume metadata mismatch
17:53 post-factum jlp1: backup everything under /var/lib/glusterd/vols and perform "gluster volume sync"
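[sketch] The recovery post-factum outlines; "good-peer" is a placeholder for a node whose volume metadata is known-good:
    # on the rejected peer, back up the volume metadata first
    cp -a /var/lib/glusterd/vols /root/glusterd-vols-backup
    # pull the volume definitions from the healthy peer, then restart glusterd
    gluster volume sync good-peer all
    systemctl restart glusterd    # or: service glusterd restart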
17:55 ovaistariq joined #gluster
17:55 plarsen joined #gluster
18:08 gbox_afk joined #gluster
18:09 coredump joined #gluster
18:11 jlp1 thanks for the tip.  i'll try that.  anyone have any pointers for a peer status of "Sent and Received peer request"?
18:11 hagarth joined #gluster
18:17 nathwill joined #gluster
18:17 post-factum jlp1: that should be something about the firewall
18:18 jhyland joined #gluster
18:21 jlp1 the firewall is stopped on all hosts
18:22 jlp1 post-factum: after a "gluster volume sync server1 all", 1 host now shows "Peer Rejected" on all other hosts.
18:22 jlp1 thank you guys for your help
18:25 kovshenin joined #gluster
18:27 jobewan joined #gluster
18:38 kpease joined #gluster
18:58 scobanx_ joined #gluster
19:09 virusuy Hi guys, i have a distributed-replicated volume and for some reason, if i copy 2 files to that volume, both go to one node (and its replica)
19:09 virusuy if i create a folder in the volume it appears on all the nodes correctly, but files only go to one pair
19:09 virusuy is that normal ?
19:13 virusuy (Oh, i forgot to say that it's a 2x2 distributed-replicated volume)
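[sketch] A quick way to see which replica pair a given file actually landed on, run from a client mount; the path is a placeholder:
    getfattr -n trusted.glusterfs.pathinfo /mnt/VOLNAME/somefile
    # the output names the backing brick(s); on a 2x2 volume each file should map to exactly one replica pair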
19:26 post-factum jlp1: did you do sync once again?
19:26 post-factum virusuy: could you show us volume layout with bricks names and where each file goes?
19:27 atalur joined #gluster
19:27 virusuy post-factum: sure
19:28 F2Knight_ joined #gluster
19:29 virusuy https://bpaste.net/show/d78e49c57f33
19:29 glusterbot Title: show at bpaste (at bpaste.net)
19:31 jlp1 post-factum: i tried the sync multiple times and restarted glusterd after each sync
19:33 F2Knight joined #gluster
19:37 swebb joined #gluster
19:37 post-factum virusuy: which brick order was specified on volume creation?
19:37 post-factum jlp1: i believe you should change the hostname on the second sync
19:37 F2Knight joined #gluster
19:38 sage joined #gluster
19:38 ahino joined #gluster
19:43 virusuy post-factum: the same as in the bpaste .. (gluster-server5, 4, 3, 2 )
19:47 F2Knight joined #gluster
19:51 F2Knight joined #gluster
19:51 post-factum virusuy: so 3 and 2 is one replica, and 5/4 is another
19:51 virusuy Indeed , post-factum
19:52 post-factum virusuy: have you tried to create different folders, subfolders and files in them?
19:52 post-factum virusuy: were all bricks specified on volume creation, or did you add them later with add-brick?
19:53 virusuy post-factum: all bricks were specified on volume creation, yes
19:53 F2Knight joined #gluster
19:53 virusuy post-factum: i tried with folders, but not subfolders and files within
19:56 post-factum virusuy: try that first, and also try to launch rebalance manually
19:58 virusuy post-factum: i'll
19:59 virusuy how can i do that ?
19:59 virusuy rebalance manually , i mean
19:59 F2Knight joined #gluster
20:04 post-factum @rebalance
20:04 glusterbot post-factum: I do not know about 'rebalance', but I do know about these similar topics: 'replace'
20:04 post-factum hm, okay
20:04 post-factum gluster volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}} - rebalance operations
20:04 post-factum e.g., gluster volume rebalance foobar start
20:04 post-factum and then just check status to be "completed"
20:04 post-factum @learn
20:04 glusterbot post-factum: (learn [<channel>] <key> as <value>) -- Associates <key> with <value>. <channel> is only necessary if the message isn't sent on the channel itself. The word 'as' is necessary to separate the key from the value. It can be changed to another word via the learnSeparator registry value.
20:06 post-factum @learn rebalance as To start volume rebalance, type "gluster volume rebalance VOLUMENAME start", and then check its status to be completed with "gluster volume rebalance VOLUMENAME status"
20:06 glusterbot post-factum: The operation succeeded.
20:06 post-factum @rebalance
20:06 glusterbot post-factum: To start volume rebalance, type gluster volume rebalance VOLUMENAME start , and then check its status to be completed with gluster volume rebalance VOLUMENAME status
20:06 post-factum much better, glusterbot
20:06 post-factum but where are my quotes
20:06 post-factum @forget rebalance
20:06 glusterbot post-factum: The operation succeeded.
20:07 post-factum @learn rebalance as To start volume rebalance, type 'gluster volume rebalance VOLUMENAME start', and then check its status to be completed with 'gluster volume rebalance VOLUMENAME status'
20:07 glusterbot post-factum: The operation succeeded.
20:07 post-factum @rebalance
20:07 glusterbot post-factum: To start volume rebalance, type 'gluster volume rebalance VOLUMENAME start', and then check its status to be completed with 'gluster volume rebalance VOLUMENAME status'
20:07 post-factum good boy
20:07 post-factum glusterbot++
20:07 glusterbot post-factum: glusterbot's karma is now 9
20:17 johnmilton joined #gluster
20:19 amye joined #gluster
20:19 MrAbaddon joined #gluster
20:30 F2Knight joined #gluster
20:36 Nybble01 joined #gluster
20:37 cliluw joined #gluster
20:38 Nybble01 left #gluster
20:39 F2Knight joined #gluster
20:42 virusuy glusterbot**
20:42 virusuy glusterbot++
20:42 glusterbot virusuy: glusterbot's karma is now 10
20:43 virusuy post-factum: is it a good practice to run rebalance manually ?
20:44 post-factum virusuy: it should be done on brick manipulation and usually shouldn't be done otherwise
20:47 Nybble joined #gluster
20:50 F2Knight_ joined #gluster
20:55 BitByteNybble110 joined #gluster
21:01 arcolife joined #gluster
21:06 virusuy post-factum: ok
21:10 johnmilton joined #gluster
21:22 Philambdo joined #gluster
21:23 JesperA joined #gluster
21:29 calavera joined #gluster
21:32 Pupeno joined #gluster
21:32 kovsheni_ joined #gluster
21:33 ovaistariq joined #gluster
21:35 hagarth joined #gluster
21:35 tyler274 joined #gluster
21:36 lh joined #gluster
21:40 Pupeno joined #gluster
21:43 virusuy joined #gluster
21:45 Pupeno joined #gluster
21:46 mrEriksson joined #gluster
21:47 calavera joined #gluster
22:01 jhyland joined #gluster
22:05 sc0 joined #gluster
22:05 Chr1st1an joined #gluster
22:27 nathwill joined #gluster
22:30 Pupeno joined #gluster
22:52 mowntan joined #gluster
22:56 johnmilton joined #gluster
23:01 DV joined #gluster
23:21 ovaistariq joined #gluster
23:34 unforgiven512 joined #gluster
23:58 hackman joined #gluster
