
IRC log for #gluster, 2015-07-31


All times are shown in UTC.

Time Nick Message
00:02 haomaiwa_ joined #gluster
00:15 spcmastertim joined #gluster
00:31 uebera|| joined #gluster
00:37 * PatNarciso_ has never set up a geo-sync.
00:37 PatNarciso_ do GUIDs remain the same between volumes?
00:38 PatNarciso_ s/geo-sync/geo-replication
00:59 nangthang joined #gluster
01:02 haomaiwang joined #gluster
01:09 Pupeno joined #gluster
01:11 nzero joined #gluster
01:14 nzero joined #gluster
01:17 victori joined #gluster
01:26 nangthang joined #gluster
01:31 Lee1092 joined #gluster
01:33 morph- anyone here running 3.7 in production yet?
01:45 nzero joined #gluster
01:52 cyberswat joined #gluster
02:02 haomaiwa_ joined #gluster
02:06 bharata-rao joined #gluster
02:13 nangthang joined #gluster
02:18 calisto joined #gluster
02:18 calisto joined #gluster
02:23 ct I updated one node in a replica 3 configuration to 3.7.3; the other nodes are still 3.7.2. The 3.7.3 node cannot connect to the other peers. They are reporting [rpcsvc.c:638:rpcsvc_handle_rpc_call] 0-rpc-service: Request received from non-privileged port. Failing request
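That error means one side is rejecting RPC connections that originate from unprivileged (>1024) ports. A hedged sketch of the workaround commonly suggested in this era follows; "myvol" is a placeholder, and a mixed 3.7.2/3.7.3 cluster may simply need all peers brought to the same version first:

    # In /etc/glusterfs/glusterd.vol, inside the "volume management" block, add:
    #     option rpc-auth-allow-insecure on
    # then restart glusterd on every node:
    service glusterd restart
    # If clients hit the same message against brick ports, the per-volume knob is:
    gluster volume set myvol server.allow-insecure on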
02:24 calisto left #gluster
02:26 cyberswat joined #gluster
02:28 gildub joined #gluster
02:29 sclarke joined #gluster
02:40 morph- hmm not much activity here :(
03:02 haomaiwa_ joined #gluster
03:08 mpietersen joined #gluster
03:13 kdhananjay joined #gluster
03:17 meghanam joined #gluster
03:21 PaulCuzner joined #gluster
03:23 jcastill1 joined #gluster
03:28 shubhendu joined #gluster
03:29 jcastillo joined #gluster
03:32 kdhananjay joined #gluster
03:44 cyberswat joined #gluster
03:51 overclk joined #gluster
03:51 atinm joined #gluster
03:58 bhubble joined #gluster
04:02 haomaiwa_ joined #gluster
04:02 TheSeven joined #gluster
04:05 bhubble1 joined #gluster
04:07 sakshi joined #gluster
04:09 RameshN joined #gluster
04:10 victori joined #gluster
04:10 kanagaraj joined #gluster
04:14 nishanth joined #gluster
04:16 yazhini joined #gluster
04:17 victori joined #gluster
04:18 cyberswat joined #gluster
04:18 overclk joined #gluster
04:25 harish_ joined #gluster
04:26 cyberswat joined #gluster
04:31 ppai joined #gluster
04:32 RameshN joined #gluster
04:35 nbalacha joined #gluster
04:36 aaronott joined #gluster
04:37 jwd joined #gluster
04:40 jwaibel joined #gluster
04:41 kshlm joined #gluster
04:50 ndarshan joined #gluster
04:50 overclk_ joined #gluster
04:51 calavera joined #gluster
04:58 deepakcs joined #gluster
04:59 uebera|| joined #gluster
05:01 ndarshan joined #gluster
05:02 haomaiwa_ joined #gluster
05:03 aaronott1 joined #gluster
05:03 pppp joined #gluster
05:08 kotreshhr joined #gluster
05:08 vimal joined #gluster
05:08 kotreshhr joined #gluster
05:13 gem_ joined #gluster
05:22 rafi joined #gluster
05:24 Saravana_ joined #gluster
05:27 kdhananjay joined #gluster
05:29 aravindavk joined #gluster
05:31 primusinterpares joined #gluster
05:33 kayn_ joined #gluster
05:35 Bhaskarakiran joined #gluster
05:37 prabu joined #gluster
05:45 anrao joined #gluster
05:48 SOLDIERz joined #gluster
05:50 uebera|| joined #gluster
05:51 jwd joined #gluster
05:52 jwaibel joined #gluster
05:54 sahina joined #gluster
05:55 Manikandan joined #gluster
05:55 hgowtham joined #gluster
05:56 smohan joined #gluster
05:56 kdhananjay joined #gluster
05:56 rjoseph joined #gluster
05:57 ashiq joined #gluster
05:59 hagarth joined #gluster
06:00 anil_ joined #gluster
06:02 18VAADU59 joined #gluster
06:04 skoduri joined #gluster
06:07 victori joined #gluster
06:11 dusmant joined #gluster
06:12 bhubble1 Hi, is there anything special I have to do to get ping.timeout to work?
06:13 kdhananjay joined #gluster
06:14 aravindavk joined #gluster
06:15 KennethDejonghe morph I'm running 3.7 in production
06:23 vmallika joined #gluster
06:24 JoeJulian bhubble1: ping-timeout, not ping.timeout, maybe? It works by default, though.
06:24 jtux joined #gluster
06:28 ppai joined #gluster
06:30 bhubble1 sorry, that is what I was using, I just checked
06:30 bhubble1 Do I need to disconnect the client before changing these settings, JoeJulian?
06:31 TvL2386 joined #gluster
06:33 nbalachandran_ joined #gluster
06:34 ramky joined #gluster
06:34 atalur joined #gluster
06:36 nangthang joined #gluster
06:40 spalai joined #gluster
06:41 overclk joined #gluster
06:51 overclk joined #gluster
06:51 dusmant joined #gluster
06:52 maveric_amitc_ joined #gluster
06:52 skoduri bhubble1, I guess that's not required; those option values should get updated dynamically
06:52 bhubble1 It was always defaulting to the 42s
06:53 skoduri even after changing the option value?
07:01 haomaiwa_ joined #gluster
07:05 Manikandan joined #gluster
07:07 bhubble1 yep
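For reference, ping-timeout is set through the network.ping-timeout volume option. A minimal sketch, with "myvol" as a placeholder volume name:

    gluster volume set myvol network.ping-timeout 10
    # confirm it took effect; the option is listed under "Options Reconfigured":
    gluster volume info myvol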
07:12 [Enrico] joined #gluster
07:13 overclk_ joined #gluster
07:21 anrao joined #gluster
07:21 jatb joined #gluster
07:24 Slashman joined #gluster
07:26 auzty joined #gluster
07:42 ctria joined #gluster
07:42 bhubble2 joined #gluster
07:47 RameshN joined #gluster
07:48 sakshi joined #gluster
07:54 Manikandan joined #gluster
07:55 overclk joined #gluster
07:58 arcolife joined #gluster
08:00 deniszh joined #gluster
08:02 haomaiwang joined #gluster
08:04 pkoro joined #gluster
08:04 nbalachandran_ joined #gluster
08:05 arcolife joined #gluster
08:09 arcolife joined #gluster
08:10 ajames-41678 joined #gluster
08:12 auzty i want to watch for inode changes using "iwatch"; why, when i change a file on server1, don't i get notified on server2? both of them share the same folder using gluster
08:13 Pupeno joined #gluster
08:13 arcolife joined #gluster
08:14 ndevos auzty: I don't know of any network filesystem that supports inotify
08:15 ndevos aravindavk: did you not look into getting support for something like that?
08:15 auzty sorry ndevos, i'm desperate, my server uses foreverjs and doesn't detect it when i change the file from the other server :D
08:15 ndevos auzty: actually, with glusterfs-3.7 we do have the new upcall infrastructure that could be used to develop inotify support
08:16 auzty but on my production server, gluster works; i dunno what happened in my vagrant
08:16 auzty all of my settings are the same, but foreverjs cannot detect the files are changing ._.
08:16 iliv joined #gluster
08:17 iliv left #gluster
08:17 ndevos auzty: gluster does not send notifications when a file changes... I don't know how it could work at all without that
08:18 auzty thanks ndevos :D
08:19 ndevos auzty: you could file a bug and ask for the iwatch/inotify feature, technically I think we can write the code for it
08:19 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
08:20 auzty wow big thanks ndevos :)
08:20 auzty okok
08:21 Pupeno joined #gluster
08:29 RameshN joined #gluster
08:30 kdhananjay joined #gluster
08:39 jcastill1 joined #gluster
08:43 aravindavk ndevos: Looks like inotify will not work on a gluster mount. We could enhance the upcall infrastructure to provide an inotify feature, I guess
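Until something like that exists, a polling loop is the usual stand-in for iwatch on a network mount, since inotify events from other servers never reach the local kernel. A minimal sketch, assuming the volume is mounted at /mnt/gluster:

    STAMP=/tmp/.gluster-poll-stamp
    touch "$STAMP"
    while sleep 5; do
        find /mnt/gluster -newer "$STAMP" -print   # files changed since last pass
        touch "$STAMP"
    done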
08:44 jcastillo joined #gluster
08:45 ppai joined #gluster
08:48 overclk joined #gluster
08:50 arcolife joined #gluster
08:55 spalai joined #gluster
08:59 anrao joined #gluster
09:02 haomaiwa_ joined #gluster
09:02 [Enrico] joined #gluster
09:07 ppai joined #gluster
09:09 dusmant joined #gluster
09:18 nishanth joined #gluster
09:19 overclk joined #gluster
09:20 sahina joined #gluster
09:20 hflai joined #gluster
09:20 ndarshan joined #gluster
09:22 shubhendu joined #gluster
09:33 ramky joined #gluster
09:37 TvL2386 joined #gluster
09:43 kaushal_ joined #gluster
09:44 s19n joined #gluster
09:47 sakshi joined #gluster
10:01 hagarth aravindavk: can we not use glusterfind here?
10:04 overclk joined #gluster
10:06 ppai joined #gluster
10:08 anrao joined #gluster
10:12 ramky joined #gluster
10:15 LebedevRI joined #gluster
10:15 harish_ joined #gluster
10:19 schandra joined #gluster
10:19 jcastill1 joined #gluster
10:19 schandra hagarth: there?
10:19 sahina joined #gluster
10:20 sakshi joined #gluster
10:21 [Enrico] joined #gluster
10:24 jcastillo joined #gluster
10:29 sage_ joined #gluster
10:31 rafi1 joined #gluster
10:34 dusmant joined #gluster
10:36 shubhendu joined #gluster
10:36 sahina joined #gluster
10:40 skoduri joined #gluster
10:42 Manikandan joined #gluster
10:47 nsoffer joined #gluster
10:49 aravindavk hagarth: not realtime :(
10:49 sakshi joined #gluster
10:49 hagarth aravindavk: what is the lowest granularity glusterfind can provide?
11:00 gem joined #gluster
11:02 spalai joined #gluster
11:11 arao joined #gluster
11:13 sakshi joined #gluster
11:16 Manikandan joined #gluster
11:19 kotreshhr left #gluster
11:26 firemanxbr joined #gluster
11:33 overclk joined #gluster
11:35 arao joined #gluster
11:37 aravindavk hagarth: have to measure; it depends on the type of operation. if an entry op is followed by data it will be quick, but if only data, it takes more time because of the GFID-to-path conversion
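For context, glusterfind works in create/pre/post sessions rather than as a realtime feed, which is why it cannot replace inotify here. A hedged usage sketch, with hypothetical session and volume names:

    glusterfind create watchsess myvol                  # register a change-tracking session
    glusterfind pre watchsess myvol /tmp/changes.txt    # dump changes since the last run
    glusterfind post watchsess myvol                    # mark the session as consumed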
11:49 byreddy_ joined #gluster
11:49 rafi joined #gluster
11:50 cyberswat joined #gluster
11:51 gem joined #gluster
11:52 Slashman joined #gluster
11:53 arcolife joined #gluster
11:54 overclk joined #gluster
11:54 skoduri joined #gluster
11:57 Manikandan joined #gluster
12:01 autoditac joined #gluster
12:02 sakshi joined #gluster
12:05 renm joined #gluster
12:05 autoditac_ joined #gluster
12:11 ndarshan joined #gluster
12:14 hagarth joined #gluster
12:14 jtux joined #gluster
12:16 arao joined #gluster
12:21 Manikandan joined #gluster
12:35 Mr_Psmith joined #gluster
12:35 kotreshhr joined #gluster
12:39 theron joined #gluster
12:40 kotreshhr left #gluster
12:44 calisto joined #gluster
12:44 plarsen joined #gluster
12:46 arao joined #gluster
12:52 renm Hi all. As glusterd effectively replaces the original nfs daemon, what is the recommended way to set up a non-glustered nfs share? Perhaps a single-brick local gluster volume?
12:54 spalai joined #gluster
12:54 ndevos renm: if you do not want to use the gluster/nfs server, you should look at nfs-ganesha, it is not recommended to use the Linux kernel nfs-server
12:56 renm ndevos: I want to use the gluster system as I am using ovirt on top of it. However I want to be able to have a normal nfs share that I can use for tftp
12:57 ndevos renm: right, in a case where you also want non-glusterfs exports, nfs-ganesha can be used, it supports local filesystems but also gluster volumes
12:58 renm ndevos: Ahhh.. thank you I will have a look
12:58 ndevos renm: or, you could indeed create a single-brick volume for your tftp files, and export that through gluster/nfs
12:59 renm ndevos: Is there any disadvantage to doing it with the single brick instead of with nfs-ganesha?
12:59 unclemarc joined #gluster
13:00 smohan joined #gluster
13:00 ndevos renm: no, I can't think of any disadvantages; gluster/nfs is a little easier to set up, but nfs-ganesha has more features
13:00 overclk joined #gluster
13:02 renm ndevos: Ok thanks - I will give both ways a try
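A hedged sketch of the single-brick route ndevos describes, with hypothetical hostnames and paths; in the 3.x era the gluster/nfs server exports started volumes over NFSv3 by default:

    gluster volume create tftpvol gluster1.local:/bricks/tftp/brick
    gluster volume start tftpvol
    # on the host that needs the plain nfs share:
    mount -t nfs -o vers=3 gluster1.local:/tftpvol /srv/tftp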
13:04 overclk_ joined #gluster
13:10 harish_ joined #gluster
13:13 aaronott joined #gluster
13:14 bennyturns joined #gluster
13:15 spalai joined #gluster
13:30 spalai joined #gluster
13:34 ajames-41678 joined #gluster
13:35 rjoseph joined #gluster
13:37 spalai joined #gluster
13:37 harold joined #gluster
13:39 cyberswat joined #gluster
13:42 B21956 joined #gluster
13:44 arao joined #gluster
13:44 overclk joined #gluster
13:47 harish_ joined #gluster
13:51 smohan joined #gluster
13:52 B21956 joined #gluster
14:00 mpietersen joined #gluster
14:01 spalai joined #gluster
14:10 nage joined #gluster
14:27 arao joined #gluster
14:36 abyss^ joined #gluster
14:51 victori joined #gluster
14:52 arao joined #gluster
14:55 marbu joined #gluster
14:55 nbalachandran_ joined #gluster
15:04 arao joined #gluster
15:06 mator RHSS 3.1, released 29/07/2015, is based on glusterfs-3.7.1
15:09 _maserati joined #gluster
15:11 _maserati_ joined #gluster
15:26 samikshan joined #gluster
15:26 tdasilva joined #gluster
15:28 nsoffer joined #gluster
15:29 mbukatov joined #gluster
15:30 cc1 joined #gluster
15:34 squizzi_ joined #gluster
15:35 kayn_ joined #gluster
15:38 calavera joined #gluster
15:38 nzero joined #gluster
15:41 _maserati_ i've managed to lose a gluster client in a very uncoordinated shift of IP space in our dev environment. The gluster servers themselves are up and good; is there a way i can tell which past gluster clients were connecting in?
15:43 skoduri joined #gluster
15:45 ndevos mator: yes, +backports so it gets closer to glusterfs-3.7.3
15:45 ndevos _maserati_: you could check the brick logs, not sure if there is another way
15:48 nzero joined #gluster
15:49 cyberswat joined #gluster
15:50 kayn_ joined #gluster
16:13 arao joined #gluster
16:15 chirino joined #gluster
16:15 cholcombe joined #gluster
16:25 s19n left #gluster
16:34 _maserati joined #gluster
16:37 victori joined #gluster
17:01 nzero joined #gluster
17:02 overclk joined #gluster
17:10 victori joined #gluster
17:15 arao joined #gluster
17:25 aravindavk joined #gluster
17:33 abyss^ joined #gluster
17:35 Mr_Psmith joined #gluster
18:03 ctria joined #gluster
18:05 nzero joined #gluster
18:21 uebera|| joined #gluster
18:23 neofob joined #gluster
18:26 nzero joined #gluster
18:26 kayn_ joined #gluster
18:54 cc1 joined #gluster
19:02 jobewan joined #gluster
19:03 cc1 joined #gluster
19:07 jwd joined #gluster
19:14 ramky joined #gluster
19:16 aaronott1 joined #gluster
19:19 tesaf joined #gluster
19:20 tesaf hurm
19:20 tesaf I'm having timeout issues while trying to create a pool
19:20 tesaf any ideas?
19:20 cc1 tesaf: what version are you running? I've been dealing with the same thing.
19:20 theron joined #gluster
19:21 tesaf 3.7.3
19:22 cc1 I'm having the same issues with the same version. 2 CentOS 6 boxes on the same subnet.
19:22 tesaf yeah, same ^^
19:23 tesaf what cent version?
19:23 cc1 6.6
19:24 tesaf interesting. same exact setup
19:24 JoeJulian cc1: I was going to ask you something last night, but you'd logged off... what was it... :(
19:24 JoeJulian cc1: But your "lab" set up that doesn't have this problem is still the same distro, right?
19:25 tesaf My lab setup works but was on 6.5- prod on 6.6 and no dice.
19:25 cc1 JoeJulian: Both on VMware though lab is just esxi
19:25 tesaf same ^^ :)
19:25 cc1 hmm what kernel version, tesaf
19:33 _maserati JoeJulian, would you have any idea on how I could find out which gluster clients have previously connected to my gluster servers? Someone at my company 'lost' them and I can't track them down
19:41 ctria joined #gluster
19:41 JoeJulian _maserati: if you have sufficient log history, you could parse that. Brick logs are in /var/log/glusterfs/bricks
19:42 _maserati thanks i'll look
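A hedged sketch of that log parse; brick logs of this era typically record each connection with an "accepted client from" line that embeds the client's identity:

    grep -h "accepted client from" /var/log/glusterfs/bricks/*.log | sort -u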
19:44 _maserati And if i can't find these stupid clients, is it fairly easy to drop a gluster client on one of the gluster servers and just point my virtual hostnames to it?
19:55 JoeJulian it's easy to mount the volume on the server, yes.
20:11 arao joined #gluster
20:15 _maserati JoeJulian, wait, do I have to install the gluster client to mount the volume on the server?
20:16 JoeJulian it's already installed.
20:17 JoeJulian The server makes use of the client for operations like heal and rebalance.
20:17 _maserati ooo
20:17 nzero check if you have mount.glusterfs
20:17 _maserati nzero, sure do
20:17 nzero example: mount -t glusterfs cluster1:/gv0 /gluster/gv0/
20:20 _maserati oh boy, it looks like my predecessor already did this: codncr-st-2410.somesite.com:/dev-volume /mnt/gluster glusterfs defaults,_netdev 0 0
20:21 _maserati now to share out to a windows client, use samba and point to /mnt/gluster - is that the correct way?
20:22 nzero yes
20:22 nzero http://www.gluster.org/community/documentation/index.php/Gluster_3.2:_Exporting_Gluster_Volumes_Through_Samba
20:22 JoeJulian no
20:22 JoeJulian use the hooks and vfs.
20:23 _maserati alien to me JoeJulian, which document can i find that process in?
20:23 nzero https://lalatendumohanty.wordpress.com/2014/04/20/glusterfs-vfs-plugin-for-samba/
20:25 JoeJulian I was looking for documentation... <sigh> If you're using an rpm based distro, it should be fairly automatic.
20:26 _maserati How can I check if my predecessor used VFS or Samba on our other environment?
20:26 _maserati or rather -just samba-
20:27 * JoeJulian grumbles about undocumented features...
20:27 JoeJulian https://github.com/gluster/glusterfs/blob/58a687a8e0967393428bc5f93f0d32bbc3792f88/extras/hook-scripts/start/post/S30samba-start.sh
20:27 glusterbot Title: glusterfs/S30samba-start.sh at 58a687a8e0967393428bc5f93f0d32bbc3792f88 · gluster/glusterfs · GitHub (at github.com)
20:27 JoeJulian Not actually documentation, but it's a bash script so you should be able to see how it works pretty easily.
20:28 JoeJulian If he re-shared a fuse mount, your users will be amazed and wonder how you made it so much faster.
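A hedged example of the vfs_glusterfs share JoeJulian is referring to, which bypasses the fuse mount; it assumes the samba-vfs-glusterfs package is installed and a gluster recent enough to ship it, and the share and volume names are hypothetical:

    # Append a share like this to /etc/samba/smb.conf, then restart smbd:
    [dev-volume]
        path = /
        vfs objects = glusterfs
        glusterfs:volume = dev-volume
        glusterfs:volfile_server = localhost
        kernel share modes = no
        read only = no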
20:37 _maserati Just verifying, this is the correct place to put that script: /var/lib/glusterd/hooks/1/start/post
20:37 JoeJulian Should be there already
20:37 _maserati tis not
20:37 _maserati nothing is in there
20:38 nzero what version of gluster are you using
20:39 _maserati 3.4.0
20:39 _maserati wow
20:39 _maserati didnt realize our dev environment was so behind
20:39 JoeJulian Ah, nevermind then...
20:39 _maserati just use regular samba share for now?
20:39 JoeJulian fuse mount and reshare it is.
20:39 _maserati coo
20:40 JoeJulian And then schedule that upgrade. 3.4.0 has memory leaks, and all sort of other horrible bugs.
20:40 elico joined #gluster
20:41 michaeljk joined #gluster
20:43 ToMiles joined #gluster
20:43 mkzero joined #gluster
20:43 _maserati fml i hate catering to windows clients
20:45 jbautista- joined #gluster
20:46 JoeJulian +1
20:46 badone joined #gluster
20:47 neofob left #gluster
20:51 jbautista- joined #gluster
20:51 _Bryan_ joined #gluster
21:05 kayn_ joined #gluster
21:14 autoditac joined #gluster
21:20 calavera joined #gluster
21:23 autoditac joined #gluster
21:40 cc1 left #gluster
21:40 calavera joined #gluster
21:42 doekia joined #gluster
21:49 calisto joined #gluster
21:53 ToMiles joined #gluster
21:56 michaeljk Short question: If I have 2 nodes, each of them with 2 hard disks in 2 mountpoints. Now I create a distributed replicated volume: "gluster volume create data replica 2 transport tcp gluster1.local:/mnt/data1/data gluster2.local:/mnt/data1/data gluster1.local:/mnt/data2/data gluster2.local:/mnt/data2/data" - If I upload a file, will it be replicated to both hosts, or just to 2 different mount points?
21:57 JoeJulian michaeljk: both hosts. See ,,(brick order)
21:57 glusterbot michaeljk: Replicas are defined in the order bricks are listed in the volume create command. So gluster volume create myvol replica 2 server1:/data/brick1 server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will replicate between server1 and server2 and replicate between server3 and server4.
21:58 michaeljk perfect - thank you! :)
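Mapped onto michaeljk's exact command, the pairing works out as follows (a sketch of the resulting layout):

    # pair 1: gluster1.local:/mnt/data1/data  <->  gluster2.local:/mnt/data1/data
    # pair 2: gluster1.local:/mnt/data2/data  <->  gluster2.local:/mnt/data2/data
    # each file is distributed to one pair, so every file ends up on both hosts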
22:06 autoditac joined #gluster
22:17 kotreshhr joined #gluster
22:20 kotreshhr left #gluster
22:20 kotreshhr joined #gluster
22:22 edwardm61 joined #gluster
22:51 theron joined #gluster
22:56 theron joined #gluster
22:57 kotreshhr left #gluster
23:03 calisto left #gluster
23:15 plarsen joined #gluster
23:21 victori joined #gluster
23:33 cleong joined #gluster
23:34 cleong left #gluster
23:54 Banko joined #gluster
23:54 Banko Hello, does anyone know if you can run a full heal and a rebalance at the same time? Or will that cause issues?
23:55 JoeJulian Doesn't seem necessary. A rebalance will walk the tree and heal anything that needs healing.
23:56 Banko hmm, what if you are already in the middle of a heal?
23:56 Banko especially when due to earlier problems at one point, the following options were set:
23:56 Banko cluster.metadata-self-heal: off
23:56 Banko cluster.entry-self-heal: off
23:56 Banko cluster.data-self-heal: off
23:56 JoeJulian Ah, well then...
23:56 JoeJulian If it was me, I'd wait.
23:57 Banko is there a way to actually stop a heal that was started by the heal full command ?
23:57 JoeJulian I don't trust rebalance anyway, especially not if the files are not in a known state.
23:57 Banko only reason is i want to try a rebalance with those options turned on, because we had a brick down on one node for 3 months without noticing (yeah i know)
23:57 Banko and it's been healing for 3 days and it isn't even close to being done
23:57 JoeJulian ouch
23:58 nangthang joined #gluster
23:58 JoeJulian Frankly, I'd still let the heal finish first.
23:58 Banko and the only reason someone noticed is that the volume needs to be expanded since it is getting full
23:58 Mr_Psmith joined #gluster
23:58 JoeJulian ... and I don't know any way to cancel the full heal.
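For watching a long-running heal like Banko's, these are the usual progress commands; "myvol" is a placeholder volume name:

    gluster volume heal myvol info         # entries still pending heal, per brick
    gluster volume heal myvol statistics   # crawl and heal counters on 3.7-era releases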
23:59 jcastill1 joined #gluster
