
IRC log for #gluster, 2015-11-10


All times shown according to UTC.

Time Nick Message
00:01 haomaiwa_ joined #gluster
00:01 Mr_Psmith joined #gluster
00:05 zhangjn joined #gluster
00:12 gildub joined #gluster
00:34 tessier joined #gluster
00:36 alghost joined #gluster
00:37 nangthang joined #gluster
00:41 mlhamburg_ joined #gluster
00:52 ira joined #gluster
00:59 zhangjn joined #gluster
01:01 7YUAAADG2 joined #gluster
01:05 EinstCra_ joined #gluster
01:16 EinstCrazy joined #gluster
01:17 Lee1092 joined #gluster
01:18 jobewan joined #gluster
01:19 DV joined #gluster
01:20 Guest____ joined #gluster
01:30 EinstCra_ joined #gluster
01:30 zhangjn_ joined #gluster
02:01 haomaiwa_ joined #gluster
02:05 jobewan joined #gluster
02:13 nangthang joined #gluster
02:29 jobewan joined #gluster
02:33 jmarley joined #gluster
02:33 jmarley joined #gluster
02:41 skylar1 joined #gluster
02:51 haomaiwa_ joined #gluster
02:59 kotreshhr joined #gluster
02:59 kotreshhr left #gluster
03:01 haomaiwa_ joined #gluster
03:30 atinm joined #gluster
03:34 ira joined #gluster
03:35 overclk joined #gluster
03:36 itisravi joined #gluster
03:38 kshlm joined #gluster
03:38 haomaiwa_ joined #gluster
03:39 Jmainguy joined #gluster
03:39 Jmainguy team: http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key 404's
03:40 Jmainguy Romeor: semiosis purpleidea johnmark JoeJulian hagarth a2 ^^
03:44 sakshi joined #gluster
03:45 skylar1 joined #gluster
03:51 calavera joined #gluster
03:56 RameshN joined #gluster
04:01 haomaiwa_ joined #gluster
04:02 shubhendu joined #gluster
04:10 dusmant joined #gluster
04:11 gem joined #gluster
04:12 pppp joined #gluster
04:13 nbalacha joined #gluster
04:20 calavera joined #gluster
04:20 overclk_ joined #gluster
04:24 TheSeven joined #gluster
04:24 nbalacha joined #gluster
04:29 kdhananjay joined #gluster
04:33 skylar1 joined #gluster
04:37 kanagaraj joined #gluster
04:40 hagarth Jmainguy: thanks for noticing that, it is happening due to the new 3.7.6 directory structure. Will get somebody to address this.
04:42 hagarth rastar, kkeithley: pub.key needs to be populated in http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.6/EPEL.repo/
04:42 glusterbot Title: Index of /pub/gluster/glusterfs/3.7/3.7.6/EPEL.repo (at download.gluster.org)
04:43 itisravi joined #gluster
04:43 ramteid joined #gluster
04:45 Jmainguy sure thing, love gluster, keep up the good work
04:47 jiffin joined #gluster
04:49 kotreshhr joined #gluster
04:54 aravindavk joined #gluster
04:56 haomaiwa_ joined #gluster
05:01 64MAEAF7N joined #gluster
05:01 skoduri joined #gluster
05:02 David_Varghese joined #gluster
05:02 atalur joined #gluster
05:03 Humble joined #gluster
05:06 rafi joined #gluster
05:09 ndarshan joined #gluster
05:23 pppp joined #gluster
05:27 pppp joined #gluster
05:27 vimal joined #gluster
05:28 pppp joined #gluster
05:28 jobewan joined #gluster
05:35 ppai joined #gluster
05:36 pppp joined #gluster
05:38 pppp joined #gluster
05:42 hgowtham joined #gluster
05:44 anil joined #gluster
06:00 ashiq joined #gluster
06:01 17SAD0488 joined #gluster
06:02 nishanth joined #gluster
06:02 hgowtham joined #gluster
06:13 d-fence joined #gluster
06:23 Apeksha joined #gluster
06:30 overclk joined #gluster
06:38 kotreshhr1 joined #gluster
06:46 rafi joined #gluster
06:48 vmallika joined #gluster
07:01 6JTACF6FO joined #gluster
07:06 tg2 joined #gluster
07:07 skoduri joined #gluster
07:09 mhulsman joined #gluster
07:19 jtux joined #gluster
07:30 R0ok_ joined #gluster
07:32 overclk joined #gluster
07:32 gb21 joined #gluster
07:48 jtux joined #gluster
07:52 EinstCrazy joined #gluster
07:56 overclk left #gluster
07:58 rafi joined #gluster
08:01 overclk joined #gluster
08:01 haomaiwa_ joined #gluster
08:01 atinm joined #gluster
08:02 kotreshhr joined #gluster
08:03 pppp joined #gluster
08:06 rafi joined #gluster
08:10 [Enrico] joined #gluster
08:11 haomaiwa_ joined #gluster
08:17 arcolife joined #gluster
08:17 21WAAAL6M joined #gluster
08:19 EinstCrazy joined #gluster
08:28 deniszh joined #gluster
08:33 Kenneth joined #gluster
08:34 deepakcs joined #gluster
08:36 bhuddah joined #gluster
08:37 mlhamburg joined #gluster
08:38 Humble joined #gluster
08:41 kovshenin joined #gluster
08:44 Eychenz joined #gluster
08:52 aravindavk joined #gluster
08:57 Eychenz joined #gluster
09:01 haomaiwang joined #gluster
09:02 EinstCra_ joined #gluster
09:05 muneerse joined #gluster
09:09 ctria joined #gluster
09:10 atinm joined #gluster
09:14 nis joined #gluster
09:15 nis anyone working with glusterfs on AWS?
09:15 ndevos nis: quite some people do, but I never know who in this channel does
09:16 nis I am using glusterfs 3.6.1 (stable) in the following configuration: 2 servers, 10 bricks each, distributed mode (no replication), and I keep getting 100% utilization on some bricks
09:17 ndevos nis: what is the commandline from "ps ax" for that process? you can use "top" to find the PID
09:17 nis In addition all my devices are generic SSDs providing 900 IOPS and the filesystem behind is ext4 with a 4K block size
09:17 Slashman joined #gluster
09:18 nis ndevos: which process do you mean?
09:19 ndevos nis: oh, well, maybe I misunderstood "100% utilization on some bricks", could you explain that a little more?
09:20 nis I wonder what can be done to increase performance to the maximum, obviously each instance should be able to reach a max throughput of 900*4K*10=35MB/s but this is nothing ... I would like to get as close as possible to the NIC limit which is 100MB/s
09:22 spalai joined #gluster
09:22 nis ndevos: when I run 'iostat -x 1 10000' on a gluster node, I see all brick stats and on some bricks I get 100% util for short periods of time, which means the device is busy and cannot accept more data
09:24 overclk nis: what all is enabled for the volume? I mean configuration if any..
09:24 nis default configuration
09:24 ndevos nis: do you know if your brick filesystem supports such a throughput?
09:25 EinstCrazy joined #gluster
09:25 nis ndevos: my backend filesystem is using SSDs with 900 IOPS
09:25 nis ndevos: for each brick
09:26 nis ndevos: what is the best way to tell gluster volume to fill a buffer before flushing to the disk ?
09:26 nis ndevos: for both read & write ?
09:27 nis ndevos: I know AWS considers up to 256K as 1 IO operation
09:27 ndevos nis: you should probably run the iostats test on the filesystem of the brick directly to see how that performs
09:28 ndevos nis: after that, you mount through Gluster, and see how much the difference is when going over the network
09:28 necrogami joined #gluster
09:29 necrogami joined #gluster
09:29 ndevos nis: also, check if the network actually performs well, I think a tool called netperf can help with that
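    A minimal sketch of those baseline checks, assuming a brick filesystem at /bricks/brick1, a FUSE client mount at /mnt/vol, and a server named gluster1 (all three names are placeholders, not from the discussion):

        # raw brick throughput, bypassing Gluster entirely
        dd if=/dev/zero of=/bricks/brick1/testfile bs=256K count=4000 oflag=direct
        iostat -x 1 10                      # watch %util on the brick device while this runs

        # the same write through the FUSE mount, to see the Gluster/network overhead
        dd if=/dev/zero of=/mnt/vol/testfile bs=256K count=4000 oflag=direct

        # raw network throughput between client and server (netperf installed on both sides)
        netserver                           # run on the Gluster node
        netperf -H gluster1 -t TCP_STREAM   # run on the client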
09:30 necrogami joined #gluster
09:31 nis one moment
09:35 nis ndevos: is there a way to tell gluster to flush data to disk in chunks of 256K ?
09:36 nis ndevos: or in other words, create a flush buffer
09:39 s-hell joined #gluster
09:39 s-hell hi guys!
09:40 plarsen joined #gluster
09:41 ndevos nis: flushing is mostly controlled by the application, I'm not sure if Gluster can introduce flushes for you
09:41 s-hell I want to create a geo-replication from one node to two other nodes. Do i have to create two geo replications or is there a way to create one geo-replicaton and define two "destinations" ?
09:42 overclk s-hell: you'd need to create two sessions (one for each slave endpoint)
09:43 s-hell overclk: Ok, thanks for the info
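    A sketch of what the two sessions could look like, assuming a master volume mastervol and slave volumes slavevol1 and slavevol2 on hosts slave1 and slave2; all of these names are hypothetical, and a non-root (mountbroker) setup would address the slave as user@host::vol instead of host::vol:

        # one independent session per slave endpoint
        gluster volume geo-replication mastervol slave1::slavevol1 create push-pem
        gluster volume geo-replication mastervol slave1::slavevol1 start

        gluster volume geo-replication mastervol slave2::slavevol2 create push-pem
        gluster volume geo-replication mastervol slave2::slavevol2 start

        # each session is monitored on its own
        gluster volume geo-replication status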
09:43 nis ndevos: so what does the write behind option mean?
09:44 ndevos nis: that is a client-side option, it can cache small writes and combine multiple subsequent ones into a single bigger write
09:47 kotreshhr left #gluster
09:48 s-hell Can any one help me with this error: https://paste.pcspinnt.de/view/dd9e0266
09:48 glusterbot Title: Untitled - Paster pcpsinnt.de (at paste.pcspinnt.de)
09:48 mattmcc joined #gluster
09:48 s-hell One of my georeplications gets faulty and i don't know why
09:48 nis ndevos: that is good, sounds like something I need to reduce the number of IO operations to glusterfs
09:48 nis ndevos: can you provide more info
09:49 ndevos nis: hmm, not sure if there is more info...
09:49 * ndevos checks
09:50 nis ndevos: are there any other options available for client-side tuning? I didn't see this option referred to as client-side in the docs (maybe I am missing ...)
09:51 sakshi joined #gluster
09:51 ndevos nis: sorry, I'm not so much into performance tuning options...
09:52 ndevos nis: https://github.com/gluster/glusterfs/blob/master/doc/developer-guide/write-behind.md
09:52 glusterbot Title: glusterfs/write-behind.md at master · gluster/glusterfs · GitHub (at github.com)
09:53 spalai joined #gluster
09:56 nis ndevos: thanks , you were great help
09:56 nis can anyone explain what the flush behind option is?
10:01 haomaiwang joined #gluster
10:02 nis ndevos: is performance.write-behind-window-size enabled by default or should I explicitly define it per volume?
10:03 nis ndevos: does performance.write-behind-window-size require the performance.flush-behind option to be set?
10:04 ndevos nis: you can see the defaults in "gluster volume set help" and "gluster volume get $VOLUME all"
10:06 dusmantkp_ joined #gluster
10:07 overclk nis: flush-behind is enabled by default, so, the default of 1MB window-size would be in effect.
10:09 nis ndevos: there is no such 'volume get $VOLUME all'
10:10 nis ndevos: I have 'volume set $VOLUME ...'
10:10 nis ndevos: how do I print all volume options ? info won't provide it
10:10 ndevos nis: ah, the "get" is relatively new, maybe that is only in glusterfs-3.7
10:11 ndevos nis: the "get" was written to show all the volume options, it was not really possible in earlier versions
10:13 nis ndevos: got it, what about version 3.6.1 stable ... can I assume default options apply or is there a way to check it ?
10:13 atinm nis, volume get is not there in 3.6
10:13 atinm nis, however gluster volume set help actually lists all the options on stdout
10:14 ndevos nis: "gluster volume info" lists the options that have been changed for the volume, the others from "gluster volume set help" are kept default
10:18 SeerKan joined #gluster
10:18 SeerKan Hi guys
10:19 SeerKan When I get a split brain situation on a folder... do I need to delete the entire folder ?
10:20 LebedevRI joined #gluster
10:23 lh joined #gluster
10:26 SeerKan tried to do this https://github.com/gluster/glusterfs/blob/master/doc/debugging/split-brain.md but I get Input/output error on both servers
10:26 glusterbot Title: glusterfs/split-brain.md at master · gluster/glusterfs · GitHub (at github.com)
10:28 s-hell One of my georeplications gets faulty and i don't know why
10:28 [Enrico] joined #gluster
10:30 EinstCra_ joined #gluster
10:32 EinstCr__ joined #gluster
10:34 gem joined #gluster
10:38 overclk s-hell: the other sessions are fine? what's different in this session?
10:38 s-hell overclk: I've used the same commands for all sessions.
10:40 overclk s-hell: and you configured this session similar to the rest?
10:41 overclk s-hell: logs would help.
10:41 gildub joined #gluster
10:42 s-hell overclk: https://paste.pcspinnt.de/view/b2ef6f18
10:42 glusterbot Title: Untitled - Paster pcpsinnt.de (at paste.pcspinnt.de)
10:44 overclk s-hell: I can't figure out much from the log. aravindavk, any idea?
10:52 overclk s-hell what's in geo-rep log (not the client log).
10:56 nis quit
10:56 s-hell overclk: https://paste.pcspinnt.de/view/1d60d961
10:56 glusterbot Title: Untitled - Paster pcpsinnt.de (at paste.pcspinnt.de)
10:57 s-hell looks like a problem with rsync
10:59 kkeithley1 joined #gluster
11:00 overclk s-hell: not really an rsync problem but something related to a configuration issue I think.
11:00 overclk s-hell
11:00 overclk s-hell: mind rechecking this session's setup (keys, etc.)?
11:01 s-hell overclk: I've already deleted the geo-replication and the volume and recreated everything
11:01 aravindavk overclk: s-hell checking
11:01 haomaiwa_ joined #gluster
11:01 s-hell overclk: there is already a running session. Same user, same key, just different volumes.
11:01 EinstCrazy joined #gluster
11:02 aravindavk s-hell: is it inside container?
11:02 overclk s-hell, OK. aravindavk is here. he cracks these cases in no time :)
11:02 s-hell aravindavk: you mean mountbroker?
11:02 s-hell yes it is.
11:03 aravindavk s-hell: any port mapping used for ssh?
11:03 s-hell aravindavk: no, default port.
11:03 aravindavk overclk: :)
11:05 ppai joined #gluster
11:07 aravindavk s-hell: seen this error recently when Gluster vol and Geo-rep run inside docker container. The error is caused by gsyncd process on slave. We are still trying to find root cause for the same
11:07 aravindavk overclk: this error when gsyncd shell on slave does validation while spawning rsync process
11:08 s-hell I had this error earlier, but that was still in my testing environment. After playing around recreating volumes and geo-replication it disappeared. Till now :-(
11:09 aravindavk overclk: line 271: $SRC/geo-replication/src/gsyncd.c
11:09 s-hell aravindavk: Anything I can do?
11:09 suliba joined #gluster
11:10 aravindavk s-hell: provide us the details of setup. We will try to reproduce in our setup.
11:10 s-hell aravindavk: 1 Master, 2 Slave, 2 Volumes
11:10 aravindavk s-hell: I can provide manual workaround steps to fix the issue.
11:11 s-hell aravindavk: sounds good
11:11 skoduri joined #gluster
11:12 aravindavk remove command="/usr/libexec/glusterfs/gsyncd" from /home/geouser/.ssh/authorized_keys file from all Slave nodes
11:12 s-hell aravindavk: did it.
11:13 aravindavk s-hell: update gsyncd.conf file /var/lib/glusterd/geo-replication/<MASTERVOL>_<SLAVEHOST>_<SLAVEVOL>/gsyncd.conf
11:13 aravindavk s-hell: replace /nonexistent/gsyncd with actual gsyncd path in all slave nodes
11:14 aravindavk s-hell: stop and start Geo-rep, it should work
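    The same workaround written out as shell steps; this is only a sketch of what was described above, assuming the geo-rep user is geouser and the real gsyncd binary lives at /usr/libexec/glusterfs/gsyncd (the exact key line and paths can differ per distribution):

        # on every slave node: drop the forced-command prefix from the geo-rep user's key
        sed -i 's|command="/usr/libexec/glusterfs/gsyncd" ||' /home/geouser/.ssh/authorized_keys

        # in the session's gsyncd.conf (on all slave nodes, per the steps above),
        # replace /nonexistent/gsyncd with the real path
        sed -i 's|/nonexistent/gsyncd|/usr/libexec/glusterfs/gsyncd|' \
            /var/lib/glusterd/geo-replication/<MASTERVOL>_<SLAVEHOST>_<SLAVEVOL>/gsyncd.conf

        # restart the session (geouser@ form for a mountbroker/non-root setup)
        gluster volume geo-replication <MASTERVOL> geouser@<SLAVEHOST>::<SLAVEVOL> stop
        gluster volume geo-replication <MASTERVOL> geouser@<SLAVEHOST>::<SLAVEVOL> start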
11:14 jhi-iain joined #gluster
11:16 jhi-iain left #gluster
11:17 s-hell aravindavk: Ohoh, now all my replications got faulty
11:18 imilne joined #gluster
11:19 aravindavk s-hell: Faulty with permission denied or different reason?
11:19 s-hell aravindavk: permission denied.
11:21 Manikandan joined #gluster
11:21 overclk aravindavk: ah ok!
11:21 s-hell aravindavk: got it. I'm so stupid.
11:23 s-hell removed the whole key :-(
11:28 s-hell aravindavk: Now it works. Thanks for help.
11:32 ndevos REMINDER: Gluster Bug Triage starts in ~30 minutes in #gluster-meeting
11:34 spalai joined #gluster
11:36 kkeithley_ ndevos: I may be a few minutes late to bug triage
11:37 ndevos kkeithley_: ok
11:37 jiffin kkeithley_: can u please merge the backport http://review.gluster.org/#/c/12483/?
11:37 glusterbot Title: Gerrit Code Review (at review.gluster.org)
11:39 kkeithley_ jiffin: done
11:39 jiffin kkeithley_++ thanks
11:39 glusterbot jiffin: kkeithley_'s karma is now 3
11:41 firemanxbr joined #gluster
11:50 dusmantkp_ joined #gluster
11:53 julim joined #gluster
12:01 haomaiwa_ joined #gluster
12:02 ndevos REMINDER: Gluster Bug Triage starts *now* in #gluster-meeting
12:05 sakshi joined #gluster
12:07 bkunal joined #gluster
12:11 kovshenin joined #gluster
12:12 nishanth joined #gluster
12:15 ndarshan joined #gluster
12:20 TvL2386 joined #gluster
12:24 Norky joined #gluster
12:27 Mr_Psmith joined #gluster
12:37 obnox joined #gluster
12:37 diegows joined #gluster
12:38 mswart joined #gluster
12:46 DV_ joined #gluster
12:49 cabillman joined #gluster
12:52 haomaiwa_ joined #gluster
12:55 SeerKan Having a very strange split brain situation on a folder, details on http://pastie.org/private/0orkuyqdw97wue1netiww ... any help is very appreciated..
12:55 glusterbot Title: Private Paste - Pastie (at pastie.org)
13:07 zhangjn joined #gluster
13:08 marcoc_ joined #gluster
13:09 marcoc_ Hi all. The gluster daemon died on one node. There was a self-heal running
13:09 marcoc_ [2015-11-10 13:00:02.763735] E [rpc-clnt.c:362:saved_frames_unwind] (--> /usr/lib64/libglusterfs.so.0(_gf_log_callingfn+0x1eb)[0x7f1fc61176bb] (--> /usr/lib64/libgfrpc.so.0(saved_frames_unwind+0x1e7)[0x7f1fc5ee31d7] (--> /usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f1fc5ee32ee] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xab)[0x7f1fc5ee33bb] (--> /usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x1c2)[0x7f1fc5ee39f2] ))))) 0-glusterfs: fo
13:09 marcoc_ rced unwinding frame type(GlusterFS Handshake) op(GETSPEC(2)) called at 2015-11-10 12:59:53.533825 (xid=0x2)
13:09 marcoc_ [2015-11-10 13:00:04.233812] E [socket.c:2278:socket_connect_finish] 0-VOL_EXPORT-client-2: connection to 192.168.50.20:24007 failed (Connection refused)
13:09 glusterbot marcoc_: ('s karma is now -114
13:09 glusterbot marcoc_: ('s karma is now -115
13:09 marcoc_ [2015-11-10 13:00:13.242608] E [socket.c:2278:socket_connect_finish] 0-glusterfs: connection to 127.0.0.1:24007 failed (Connection refused)
13:09 glusterbot marcoc_: ('s karma is now -116
13:10 glusterbot marcoc_: ('s karma is now -117
13:10 glusterbot marcoc_: ('s karma is now -118
13:10 the-me joined #gluster
13:11 bhuddah marcoc_: can you pastebin that?
13:12 marcoc_ http://pastebin.com/7R7AXjp5
13:12 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
13:12 skylar1 joined #gluster
13:12 marcoc_ @paste
13:12 glusterbot marcoc_: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
13:13 marcoc_ http://paste.ubuntu.com/13215715/
13:14 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:14 bhuddah well. you have permission denied errors?
13:16 marcoc_ I ran a brick remove, lowering the replica count by 1 on a 2x3 volume
13:16 DV joined #gluster
13:17 marcoc_ I got "Connection failed. Please check if gluster daemon is operational.
13:17 marcoc_ "
13:17 DV_ joined #gluster
13:18 marcoc_ one hour ago I added some new bricks, started healing etc
13:19 uebera|| joined #gluster
13:19 mswart joined #gluster
13:20 ira joined #gluster
13:23 marcoc_ cat etc-glusterfs-glusterd.vol.log http://paste.ubuntu.com/13215750/
13:23 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:26 jmarley joined #gluster
13:27 Norky I have a 4-brick distributed, replicated (2) volume. I want to remove 2 bricks, so that there is no replication, leaving 2 (distributed) bricks in the volume. I thought remove-brick, setting the replica count to 1 was the correct way to do this, but that gives me:
13:27 Norky "volume remove-brick start: failed: Migration of data is not needed when reducing replica count. Use the 'force' option"
13:29 Norky http://fpaste.org/288802/
13:29 glusterbot Title: #288802 Fedora Project Pastebin (at fpaste.org)
13:30 imilne left #gluster
13:32 marcoc_ Norky I suggest you run a full heal first and then remove the brick using force
13:32 Norky how should I reduce the volume to two bricks? Reduce the replica count first, then remove two bricks?
13:33 Norky this is a test volume I have just created, so it's not itself important
13:33 vimal joined #gluster
13:33 Norky I will run the heal though
13:33 marcoc_ so just reduce the replica level and remove the bricks in the same command with the force option
13:35 marcoc_ bhuddah: what do you mean?
13:35 Norky hmm, okay. The --force option makes me nervous for when I come to do this on real data, so I wanted to check if I was doing something wrong
13:38 marcoc_ force is there to warn you about what you are doing, because you'll lose data if you remove the wrong bricks I think
13:38 marcoc_ I thought the same as you the first time I did a remove
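    For reference, a sketch of the command shape being discussed for shrinking a 4-brick replica-2 volume down to plain distribute; the volume and brick names are placeholders, and force discards the replica copies held on the removed bricks:

        # sync both replicas first
        gluster volume heal myvol full

        # drop to replica 1, removing one brick from each replica pair in a single command
        gluster volume remove-brick myvol replica 1 \
            server2:/bricks/brick1 server2:/bricks/brick2 force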
13:39 marcoc_ service glusterd start ---> [FAILED]
13:39 glusterbot marcoc_: -'s karma is now -349
13:40 Norky glusterbot's karma-tracking really needs an exclusion list :/
13:50 haomaiwa_ joined #gluster
13:54 turkleton joined #gluster
13:55 turkleton nis: Another potential issue with using GFS at AWS is the network throughput from node to node, although I'm not sure it'd really impact your current set up since it's just distributed with no replication.
13:56 marcoc_ glusterd doesn't start anymore on one node. Log: http://paste.ubuntu.com/13215902/ some warnings and a couple of errors
13:56 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
13:58 marcoc_ I copied the files /var/lib/glusterd/vols/VOL_EXPORT/bricks/s20gfs.ovirt.prisma:-gluster-VOL_EXPORT-brick3 and /var/lib/glusterd/vols/VOL_EXPORT/bricks/s21gfs.ovirt.prisma\:-gluster-VOL_EXPORT-brick2 from another node. Now glusterd started
13:58 marcoc_ why did they get deleted and glusterd fail?
13:58 marcoc_ # glusterd --version
13:58 marcoc_ glusterfs 3.7.5 built on Oct  7 2015 16:27:04
14:01 haomaiwa_ joined #gluster
14:13 Norky marcoc_, thank you, using force worked fine on my test volume. Also, clients carried on happily accessing the volume while the change was made, which is what I wanted to test :)
14:18 ira joined #gluster
14:18 poornimag joined #gluster
14:22 hgowtham joined #gluster
14:29 marcoc_ Norky you are welcome. Be careful with brick remove start... I lost some files
14:30 dgandhi joined #gluster
14:31 marcoc_ http://www.gluster.org/pipermail/gluster-users/2015-October/024121.html
14:31 glusterbot Title: [Gluster-users] Missing files after add new bricks and remove old ones - how to restore files (at www.gluster.org)
14:31 marcoc_ I'm a bit worried because I'm getting a lot of [2015-11-10 14:29:39.499073] E [MSGID: 106108] [glusterd-syncop.c:1069:_gd_syncop_commit_op_cbk] 0-management: Failed to aggregate response from  node/brick
14:31 marcoc_ in the log
14:37 rafi joined #gluster
14:40 bhuddah joined #gluster
14:41 hamiller joined #gluster
14:47 zhangjn joined #gluster
14:50 shubhendu joined #gluster
14:53 amye joined #gluster
14:53 nbalacha joined #gluster
14:55 bennyturns joined #gluster
14:55 diegows joined #gluster
14:56 zhangjn joined #gluster
14:56 amye joined #gluster
14:59 rafi joined #gluster
15:01 haomaiwa_ joined #gluster
15:03 Eychenz joined #gluster
15:21 nishanth joined #gluster
15:23 cholcombe joined #gluster
15:28 kovshenin joined #gluster
15:29 poornimag joined #gluster
15:33 theron joined #gluster
15:34 maserati joined #gluster
15:40 dlambrig_ joined #gluster
15:49 Eychenz joined #gluster
15:50 ayma joined #gluster
15:51 bowhunter joined #gluster
15:54 Eychenz joined #gluster
15:55 Eychenz left #gluster
16:01 1JTAAEAUT joined #gluster
16:01 deniszh joined #gluster
16:25 volga629 joined #gluster
16:25 volga629 Hello Everyone, I've got an issue adding a new brick into an existing volume
16:25 volga629 volume add-brick: failed: Incorrect number of bricks supplied 1 with count 2
16:26 volga629 2 already running
16:26 volga629 the third one I can't add, probing worked no problem
16:27 wushudoin joined #gluster
16:27 volga629 gluster volume add-brick datapoint02 replica 3 vg2.networklab.lan:/var/lib/vm_store/tmp gives me volume add-brick: failed: /var/lib/vm_store/tmp is already part of a volume
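    The "already part of a volume" message usually means the directory (or a parent of it) still carries Gluster's volume-id xattr from an earlier volume; a hedged cleanup sketch, only appropriate if whatever sits under that path is disposable:

        # inspect the leftover Gluster markers on the brick directory
        getfattr -d -m . -e hex /var/lib/vm_store/tmp

        # clear them plus the .glusterfs metadata, then retry the add-brick
        setfattr -x trusted.glusterfs.volume-id /var/lib/vm_store/tmp
        setfattr -x trusted.gfid /var/lib/vm_store/tmp
        rm -rf /var/lib/vm_store/tmp/.glusterfs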
16:38 volga629 getting this error volume add-brick: failed: Host vg2.networklab.lan is not in 'Peer in Cluster' state
16:38 calavera joined #gluster
16:42 rafi joined #gluster
16:44 harish joined #gluster
16:45 Slashman hello, I'm looking at the FAQ on the glusterfs website. I understand that compatibility between major versions is not supported, but it's not very clear whether compatibility between minor versions is. Can I have glusterfs nodes with different versions? Can I have a client on 3.6.2 with the cluster running server 3.6.6?
16:46 Jmainguy Slashman: yeah I think so
16:47 amye left #gluster
16:48 whereismyjetpack joined #gluster
16:49 whereismyjetpack when setting ssl auth on a gluster volume, in the docs 'Zaphod' is the identity that's allowed — is this the CN of a pem cert?
16:50 amye1 joined #gluster
16:53 Gill joined #gluster
16:58 volga629 why after the probe is the state set to 'Accepted peer request (Connected)' and not 'Peer in Cluster'?
17:46 muneerse2 joined #gluster
17:54 lord4163_ joined #gluster
17:56 lord4163_ Dispersed volumes look interesting to me, are they stable? And the bitrot detection, does it work well?
17:58 ponchoco joined #gluster
17:58 ponchoco can anyone offer any gluster help?  i have a replica 2 volume with 4 bricks.  all 4 bricks are online, but the gluster logs say that quorum is not met. any ideas?
17:59 ponchoco quorum-type is set to auto
18:03 wushudoin joined #gluster
18:17 amye joined #gluster
18:18 shyam left #gluster
18:22 XpineX joined #gluster
18:34 Rapture joined #gluster
18:38 p0rtal joined #gluster
18:40 volga629 how do I sync a replicated vol after a new brick is added?
18:42 tomatto joined #gluster
18:43 B21956 joined #gluster
18:47 kovshenin joined #gluster
18:51 poornimag joined #gluster
18:59 ponchoco gluster volume rebalance VOLNAME start
19:03 julim joined #gluster
19:09 volga629 thanks, it is Type: Replicate so it should be only the heal option I guess
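    For a pure replicate volume the usual way to populate a newly added brick is a full self-heal rather than a rebalance; a minimal sketch with a placeholder volume name:

        # copy existing data onto the new brick
        gluster volume heal myvol full

        # watch progress; "Number of entries: 0" on every brick means nothing is left pending
        gluster volume heal myvol info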
19:12 kovshenin joined #gluster
19:23 shyam joined #gluster
19:32 lpabon joined #gluster
19:47 shaunm joined #gluster
19:49 vimal joined #gluster
19:56 David_Vargese joined #gluster
20:12 volga629 what does "Number of entries: 0" mean in the output from the heal command?
20:20 klaxa joined #gluster
20:38 tjohnson2 joined #gluster
20:39 tjohnson2 left #gluster
20:40 toddejohnson joined #gluster
20:41 calavera joined #gluster
20:48 timotheus1_ joined #gluster
20:52 tomatto joined #gluster
21:11 DV joined #gluster
21:15 BobJJ joined #gluster
21:16 BobJJ left #gluster
21:29 plarsen joined #gluster
21:29 mikemol joined #gluster
21:30 mikemol So, trying to install glusterfs-ganesha-3.7.6 from glusterfs-epel, and I get an error "Requires: nfs-ganesha-gluster". What repo, if any, should I expect to find that package in? Or do I need to build it myself?
21:30 plarsen joined #gluster
21:53 timotheus1__ joined #gluster
21:57 gzcwnk joined #gluster
21:58 ron-slc_ joined #gluster
22:07 DV joined #gluster
22:10 dlambrig_ joined #gluster
22:13 ira joined #gluster
22:20 deniszh joined #gluster
22:26 ron-slc joined #gluster
22:26 ron-slc_ joined #gluster
22:27 p0rtal joined #gluster
22:43 PinkFreud joined #gluster
22:43 PinkFreud hey all.  I'm having a weird issue with gluster 3.7.3 on production data.
22:45 PinkFreud 4 nodes, 2x2 configuration.
22:45 PinkFreud the brick shuts down, and the underlying local filesystem throws an I/O error.
22:46 PinkFreud only way I can get to the filesystem is to unmount it and remount.
22:54 gzcwnk so it's at the operating system layer?
22:55 PinkFreud yes.
22:55 PinkFreud the bricks are VMs, though.
22:55 PinkFreud and the underlying hardware on the physical hosts is fine.
22:55 gzcwnk so you have 4 physical hosts? and 4 VMs?
22:55 PinkFreud correct.
22:56 PinkFreud this problem is happening only on a single brick for the moment.
22:56 gzcwnk what VM technology?
22:57 PinkFreud ESXi
22:57 PinkFreud we're also seeing this in one of the logs:
22:57 PinkFreud [2015-11-10 22:56:44.629371] W [socket.c:642:__socket_rwv] 0-management: readv on /var/run/gluster/9bdca02c03e8a4df7d0b69e4a69e95c6.socket failed (Invalid argument)
22:57 gzcwnk so ESXi is nfs mounted to the gluster cluster?
22:57 PinkFreud erk, no.
22:58 gzcwnk which esxi? 5.5?
22:58 PinkFreud standard VM deployment on ESXi.  the 'disk' is, as usual, a file on ESXi.
22:58 PinkFreud yes.
22:59 gzcwnk So you have 4 ESXi boxes, each with a gluster VM on it, which makes a system?
22:59 PinkFreud yes.
22:59 gzcwnk neat
23:00 PinkFreud it'd be even neater if we could fix this issue.  :)
23:00 gzcwnk I was thinking of doing the same thing but with 3
23:01 gzcwnk I've only just built a 2-node Ubuntu test setup so I know diddly, sorry
23:02 gzcwnk maybe the mailing list?
23:03 gzcwnk can you swap VMs? i.e. see if it follows the VM or stays with the hardware?
23:03 gzcwnk with 4.0 I used to get a SAS panic occasionally so maybe make sure ESXi is fully patched?
23:04 gzcwnk 5.5 seems way more stable
23:21 ctria joined #gluster
23:21 amye joined #gluster
23:24 amye joined #gluster
23:32 F2Knight joined #gluster
23:42 BrettM joined #gluster
23:42 BrettM hello
23:42 glusterbot BrettM: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
23:45 BrettM I need some pointers on where to look for a problem: I have a gluster 3.7.3 arbiter volume set up across three nodes. When I start up one of the (non-arbiter) brick servers I get a 30 second delay in creating a new file on the volume. I have enabled DEBUG and see that the delay is in fresh_lookup --- what are some key things I could check to diagnose further?
23:45 glusterbot BrettM: -'s karma is now -350
23:53 plarsen joined #gluster
23:58 Mr_Psmith joined #gluster
