
IRC log for #gluster, 2015-10-08


All times shown according to UTC.

Time Nick Message
00:07 shyam joined #gluster
00:08 brian joined #gluster
00:08 CyrilPeponnet @JoeJulian I have a weird issue
00:09 CyrilPeponnet one of my volume is not mounting on client using fuse anymore
00:09 CyrilPeponnet nothing in logs it just return 1
00:09 JoeJulian is the disk full?
00:09 CyrilPeponnet nope
00:10 CyrilPeponnet other vol are fine
00:10 CyrilPeponnet just this one
00:10 CyrilPeponnet but machine with already mounted vol are fine
00:10 CyrilPeponnet could it be related to the option we use this morning ?
00:10 JoeJulian I've been using that option without any problems, so I don't think so.
00:12 JoeJulian If you look at the other mounts in ps, you can see the format of the command line (that's all mount.glusterfs does is build that command line). If you form your command line similarly to mount the volume you want, you can add --debug to run it in the foreground.
00:12 CyrilPeponnet https://gist.github.com/CyrilPeponnet/49d30050d4db47352882
00:12 glusterbot Title: gist:49d30050d4db47352882 · GitHub (at gist.github.com)
00:12 CyrilPeponnet yes but nothing relevant for me at least
00:12 CyrilPeponnet maybe 0-glusterfs: ping timeout is 0, returning
00:12 CyrilPeponnet not sure
00:12 zhangjn joined #gluster
00:13 JoeJulian maybe
00:14 JoeJulian What version are you on again? I'll look at the source for that function and see what it means.
00:14 CyrilPeponnet if I unmount I can't mount again
00:14 CyrilPeponnet 3.6.5
00:14 CyrilPeponnet I will try to revert it to see
00:16 CyrilPeponnet this doesn't help... I ran gluster vol reset usr_global cluster.read-hash-mode but no luck, once unmounted it's gone
00:19 CyrilPeponnet no issue with other vol just this one
00:20 JoeJulian looks like that ping-timeout line is normal
00:21 JM_ joined #gluster
00:22 CyrilPeponnet this is insane...
00:25 CyrilPeponnet /usr/sbin/glusterfs --read-only --enable-ino32 --volfile-server=serverblabla --volfile-id=/usr_global /usr/global  return 1
00:25 muneerse joined #gluster
00:26 CyrilPeponnet @JoeJulian is there a mode TRACE ?
00:27 JoeJulian there is
00:27 JoeJulian log-level
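Note: mount.glusterfs only assembles a glusterfs command line, so a failing mount can be re-run by hand in the foreground. A sketch based on the command pasted earlier in this exchange, reusing its server/volume names and adding the debug and trace flags discussed here:
    /usr/sbin/glusterfs --debug --log-level=TRACE --read-only --enable-ino32 --volfile-server=serverblabla --volfile-id=/usr_global /usr/global
--debug keeps the process in the foreground and sends the log to stderr, which is usually enough to see why the mount exits with 1.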
00:28 plarsen joined #gluster
00:29 CyrilPeponnet options.c:148:xlator_option_validate_sizet] 0-usr_global-io-cache: no range check required for 'option max-file-size 50MB'
00:29 CyrilPeponnet doesn't look like an error
00:31 CyrilPeponnet @JoeJulian  https://gist.github.com/CyrilPeponnet/49d30050d4db47352882 updated in trace mode... if you can help...
00:32 JoeJulian OH! geez... That's why we don't get anything. The log level's changed before it dies.
00:32 JoeJulian I told them that was a bug.
00:32 CyrilPeponnet ?
00:32 JoeJulian If you reset the log-level changes you make with the volume settings, you should be able to find out why it's ending.
00:33 JoeJulian I argued that command line log-level should override that.
00:34 CyrilPeponnet hmm let me check
00:35 CyrilPeponnet my clients will start to log like hell
00:35 CyrilPeponnet quick-read.c:823:check_cache_size_ok] 0-usr_global-quick-read: Cache size 4294967296 is greater than the max size of 4017291264
00:35 CyrilPeponnet ok at least I have something here
00:36 CyrilPeponnet @JoeJulian https://gist.github.com/CyrilPeponnet/49d30050d4db47352882 updated
00:36 CyrilPeponnet I think I got it
00:36 CyrilPeponnet Cache size 4294967296 is greater than the max size of 4017291264
00:37 CyrilPeponnet gluster vol set usr_global performance.cache-size 4017291264
00:38 CyrilPeponnet should fix it
00:38 DV joined #gluster
00:38 CyrilPeponnet YES
00:41 jamesc joined #gluster
00:42 dgbaley joined #gluster
00:43 CyrilPeponnet @JoeJulian ok I have other issue with 3.6.4 client where cache size 4017291264 is greater than the max size of 2976333824
00:44 CyrilPeponnet looks like cache size 4017291264 is for 3.6.5
00:44 CyrilPeponnet what a pain...
00:45 CyrilPeponnet for the record, the day gluster went mad is the day I tried to set the cache to 8GB
00:46 CyrilPeponnet since then, it's been insane.
00:46 CyrilPeponnet @JoeJulian updating to 3.6.5 doesn't help cache size 4017291264 is greater than the max size of 2976333824
00:47 JoeJulian I have no idea where that number comes from.
00:48 JoeJulian 0xB167400 doesn't seem like a logical limit.
00:48 CyrilPeponnet must be computed somehow
00:49 CyrilPeponnet [quick-read.c:818:check_cache_size_ok] 0-usr_global-quick-read: Max cache size is 2976333824
00:50 CyrilPeponnet https://github.com/gluster/glusterfs/blob/0773ca67fdb60a142207759fa6c07a69882ce59c/xlators/performance/io-cache/src/io-cache.c#L1633
00:50 glusterbot Title: glusterfs/io-cache.c at 0773ca67fdb60a142207759fa6c07a69882ce59c · gluster/glusterfs · GitHub (at github.com)
00:50 CyrilPeponnet is cache client side or server side
00:50 JoeJulian page_size = sysconf (_SC_PAGESIZE);
00:50 JoeJulian num_pages = sysconf (_SC_PHYS_PAGES);
00:50 JoeJulian memsize = page_size * num_pages;
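Note: those three lines mean the limit is simply total physical memory (page size times number of physical pages). Assuming a Linux client with glibc's getconf, the same number can be reproduced from a shell:
    echo $(( $(getconf PAGE_SIZE) * $(getconf _PHYS_PAGES) ))
A cache-size larger than that fails the translator's validation on the client doing the mount, which is why the mount was exiting without an obvious error at the default log level.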
00:51 CyrilPeponnet bad for your karma ;p
00:51 suliba joined #gluster
00:51 CyrilPeponnet yes, but is it on client or server side ?
00:51 JoeJulian yeah, it's only 3 lines.
00:52 JoeJulian Looks like quick-read is in the fuse vol, so client-side.
00:52 CyrilPeponnet make sense it is client side
00:52 CyrilPeponnet hmmm
00:53 CyrilPeponnet so ok I set a value in the vol but somehow some vols can't handle it on mount so the mount fails
00:53 CyrilPeponnet and this also needs a remount
00:53 JoeJulian Because the kernel says there's not enough free memory.
00:53 CyrilPeponnet so the option I set for now is useless until I remount all my clients :p
00:53 CyrilPeponnet (yes I get it now)
00:54 CyrilPeponnet well...
00:54 CyrilPeponnet at least today was an almost good day
00:54 JoeJulian I'd file a bug on that. Our expectation as admins is that a cli change takes effect immediately.
00:54 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
00:54 CyrilPeponnet thanks to you @JoeJulian it explains a lot of things
00:55 CyrilPeponnet ok cool
00:55 JoeJulian Though I suppose you should be glad that all your clients didn't immediately crash. :D
00:55 CyrilPeponnet sure :)
00:55 JoeJulian That check is actually pretty lame.
00:56 CyrilPeponnet and as fuse doesn't handle -o remount, I will have to salt a good old umount -l /usr/global && mount /usr/global
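Note: a minimal sketch of that mass remount with Salt, assuming every client is a minion and /usr/global already has an fstab entry:
    salt '*' cmd.run 'umount -l /usr/global && mount /usr/global'
The lazy unmount detaches the old mount immediately; processes holding files open keep using the detached mount until they reopen their paths.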
00:57 CyrilPeponnet cache-size should be a limit
00:57 CyrilPeponnet set by the server
00:57 JoeJulian Oh, it actually does reconfigure immediately.
00:57 CyrilPeponnet and then the client is calling proper cache size
00:57 CyrilPeponnet calculating
00:57 CyrilPeponnet really ?
00:57 JoeJulian Your logs would have had an error, "Not reconfiguring cache-size"
00:57 CyrilPeponnet ok
00:57 CyrilPeponnet if I had logs
00:57 CyrilPeponnet :)
00:59 JoeJulian Go get some dinner and a good rest. :D
00:59 CyrilPeponnet sure thanks again
01:00 CyrilPeponnet I owe you a bear if I stop by seattle
01:00 CyrilPeponnet beer
01:00 CyrilPeponnet :p
01:08 nangthang joined #gluster
01:20 vimal joined #gluster
01:35 gnudna joined #gluster
01:35 shyam joined #gluster
01:35 Lee1092 joined #gluster
01:40 gnudna left #gluster
01:52 gildub joined #gluster
02:06 Ru57y joined #gluster
02:10 haomaiwa_ joined #gluster
02:10 haomaiwang joined #gluster
02:18 dlambrig_ joined #gluster
02:28 nangthang joined #gluster
02:34 jdossey joined #gluster
02:35 rafi joined #gluster
02:39 theron joined #gluster
02:55 dlambrig_ left #gluster
03:10 skylar1 joined #gluster
03:10 skylar joined #gluster
03:15 nishanth joined #gluster
03:15 haomaiwa_ joined #gluster
03:20 taolei joined #gluster
03:23 taolei Hi, when I create a volume (disperse, 6 + 2), I get a prompt saying "This configuration is not optimal on most workloads.", but no more information is provided and I don't know the reason. Any help, please?
03:26 auzty joined #gluster
03:29 shubhendu joined #gluster
03:32 nishanth joined #gluster
03:36 TheSeven joined #gluster
03:38 RameshN joined #gluster
03:47 kotreshhr joined #gluster
03:47 kotreshhr left #gluster
03:48 hagarth joined #gluster
03:52 shubhendu joined #gluster
03:58 neha_ joined #gluster
04:02 atinm joined #gluster
04:02 haomaiwa_ joined #gluster
04:04 EinstCrazy joined #gluster
04:07 armyriad joined #gluster
04:09 badone__ joined #gluster
04:10 cuqa_ joined #gluster
04:18 rafi joined #gluster
04:19 nbalacha joined #gluster
04:27 skylar1 joined #gluster
04:29 ashiq joined #gluster
04:35 kotreshhr joined #gluster
04:35 kotreshhr left #gluster
04:36 ashiq joined #gluster
04:36 Manikandan joined #gluster
04:39 kanagaraj joined #gluster
04:40 ppai joined #gluster
04:42 jiffin joined #gluster
04:46 sakshi joined #gluster
04:48 yazhini joined #gluster
04:54 ndarshan joined #gluster
05:02 bharata-rao joined #gluster
05:02 haomaiwa_ joined #gluster
05:02 deniszh left #gluster
05:02 GB21 joined #gluster
05:04 pppp joined #gluster
05:05 gem joined #gluster
05:14 EinstCrazy joined #gluster
05:21 kotreshhr joined #gluster
05:27 poornimag joined #gluster
05:27 haomaiwa_ joined #gluster
05:28 hgowtham joined #gluster
05:30 maveric_amitc_ joined #gluster
05:34 EinstCrazy joined #gluster
05:35 deepakcs joined #gluster
05:47 kdhananjay joined #gluster
05:48 kdhananjay joined #gluster
05:50 GB21 joined #gluster
05:55 haomaiw__ joined #gluster
05:56 haomaiwa_ joined #gluster
05:58 haomaiwa_ joined #gluster
06:01 haomaiwang joined #gluster
06:01 Bhaskarakiran joined #gluster
06:11 jwd joined #gluster
06:14 raghu joined #gluster
06:17 mhulsman joined #gluster
06:17 jtux joined #gluster
06:18 ramky joined #gluster
06:27 nangthang joined #gluster
06:27 hagarth joined #gluster
06:29 kayn joined #gluster
06:38 vmallika joined #gluster
06:42 twisted` joined #gluster
06:43 fsimonce joined #gluster
06:44 malevolent joined #gluster
06:47 hagarth joined #gluster
06:47 gem joined #gluster
06:48 rgustafs joined #gluster
06:49 mhulsman joined #gluster
06:51 suliba joined #gluster
06:55 atalur joined #gluster
07:01 haomaiwa_ joined #gluster
07:03 haomaiwang joined #gluster
07:04 LebedevRI joined #gluster
07:06 sakshi joined #gluster
07:06 nangthang joined #gluster
07:08 deniszh joined #gluster
07:09 Saravana_ joined #gluster
07:10 gem joined #gluster
07:15 harish_ joined #gluster
07:15 rraja joined #gluster
07:28 cristian joined #gluster
07:28 Raide joined #gluster
07:29 skoduri joined #gluster
07:33 haomaiwa_ joined #gluster
07:34 LebedevRI joined #gluster
07:39 aravindavk joined #gluster
07:43 [Enrico] joined #gluster
07:46 jdossey joined #gluster
07:47 mbukatov joined #gluster
07:47 Slashman joined #gluster
07:48 dpetrov morning
07:55 sakshi joined #gluster
08:01 haomaiwa_ joined #gluster
08:15 Raide joined #gluster
08:18 mhulsman1 joined #gluster
08:34 Raide joined #gluster
08:34 ParsectiX joined #gluster
08:35 ParsectiX Hi guys. Where can I find documentation how to write a .vol file for gluster client ?
08:39 xavih joined #gluster
08:44 jdossey joined #gluster
08:44 jiffin ParsectiX: Maybe this will be helpful http://www.gluster.org/community/documentation/index.php/Understanding_Vol_Files
08:46 ccoffey joined #gluster
08:51 ParsectiX jiffin: Thanks for the info. Actually I have the same vol file as the example. Manually I can mount my brick. When I use a .vol file, I get "Mount failed. Please check the log file for more details."
08:51 ParsectiX Trying to troubleshoot the error
08:52 ParsectiX in journal I get systemd[1]: Unit volume-gv0.mount entered failed state.
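Note: the hand-written client volfile format that page describes looks roughly like the sketch below (all host and brick names are placeholders). In practice volfiles are generated by glusterd, so a hand-written one mainly helps for understanding or debugging:
    volume client-0
      type protocol/client
      option transport-type tcp
      option remote-host server1.example.com
      option remote-subvolume /data/brick1/gv0
    end-volume
    volume client-1
      type protocol/client
      option transport-type tcp
      option remote-host server2.example.com
      option remote-subvolume /data/brick1/gv0
    end-volume
    volume gv0-replicate
      type cluster/replicate
      subvolumes client-0 client-1
    end-volume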
08:53 kovshenin joined #gluster
08:54 dpetrov guys, I am a bit confused and I'd appreciate if someone can clear this for me
08:54 dpetrov when we have a volume type replicated
08:55 dpetrov do we effectively have two gluster "servers" which are also clients?
08:56 dpetrov or we have a classic client/server environment, where one of the devices is acting as server and replicates data across the clients?
08:56 ParsectiX Guys did anyone tried to configure GlusterFS server using floating ips? I have problem when I try to add nodes to the pool
08:56 dgbaley dpetrov: The client writes to all servers
08:56 Raide joined #gluster
08:56 dpetrov ah, okay...
08:57 Simmo Good morning to everyone : )
08:57 dpetrov in this case then ..
08:57 dgbaley dpetrov: I've seen talk about a backend network where the servers handle replication, but AFAIK that's not how it is now
08:57 dpetrov right..
08:57 dpetrov okay, in this case then
08:57 dpetrov hm .. I am a bit confused now
08:57 dpetrov so I have a pretty straight-forward setup
08:58 dpetrov 2 servers, one volume (replicated)
08:58 dpetrov I had to reload the client yesterday
08:58 dpetrov and as a result
08:58 dpetrov I am unable to fetch any information about the gluster volume now
08:58 dpetrov # gluster volume info
08:58 dpetrov Connection failed. Please check if gluster daemon is operational.
08:59 dpetrov the glusterd is running
08:59 dpetrov and what is more interesting - the volume is mounted
08:59 dpetrov and I am able to see all the data being correctly replicated
08:59 dgbaley You're running that on a client system or on one of the servers?
08:59 dpetrov this is the client
09:00 dgbaley hmm, I've only ever ran that on the server because I thought it connects via a socket by default
09:01 dpetrov I think it should be operational on all members
09:01 dpetrov and it used to be ..
09:01 haomaiwa_ joined #gluster
09:01 dpetrov furthermore, on the server I see this:
09:01 dpetrov ~# gluster peer status
09:01 dpetrov Number of Peers: 1
09:01 dpetrov Hostname: rom
09:01 dpetrov Uuid: 3d31db39-4230-490d-9577-04dc0e4be4ef
09:01 dpetrov State: Peer in Cluster (Disconnected)
09:02 dusmant joined #gluster
09:02 dpetrov so the state is disconnected..
09:05 sakshi joined #gluster
09:12 deniszh joined #gluster
09:12 Raide joined #gluster
09:15 dpetrov any thoughts?
09:17 RayTrace_ joined #gluster
09:17 dgbaley Yes: /var/log/glusterfs
09:18 dgbaley Make sure your daemons are running correctly on both systems. Can they even ping each other?
09:18 dpetrov as I said, even the replication is working fine at the moment..
09:18 dpetrov so, yes. they can ping each other
09:18 dpetrov in the error logs I got this
09:19 dpetrov [2015-10-08 09:16:14.355717] E [name.c:147:client_fill_address_family] 0-glusterfs: transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options
09:19 dpetrov [2015-10-08 09:16:16.356542] I [socket.c:2255:socket_event_handler] 0-transport: disconnecting now
09:19 dpetrov [2015-10-08 09:16:17.357070] W [dict.c:1055:data_to_str] (-->/usr/lib/i386-linux-gnu/glusterfs/3.5.2/rpc-transport/socket.so(+0x3fe7) [0xb5fd8fe7] (-->/usr/lib/i386-linux-gnu/glusterfs/3.5.2/rpc-transport/socket.so(socket_client_get_remote_sockaddr+0x5f) [0xb5fdf33f] (-->/usr/lib/i386-linux-gnu/glusterfs/3.5.2/rpc-transport/socket.so(client_fill_address_family+0x1fc) [0xb5fdefcc]))) 0-dict: data is NULL
09:19 glusterbot dpetrov: ('s karma is now -112
09:19 glusterbot dpetrov: ('s karma is now -113
09:19 glusterbot dpetrov: ('s karma is now -114
09:20 RayTrace_ joined #gluster
09:21 atalur joined #gluster
09:22 mhulsman joined #gluster
09:22 poornimag joined #gluster
09:27 dgbaley Sorry, I don't know off the top of my head. You'll just have to do more digging, you'll figure it out =)
09:40 stickyboy joined #gluster
09:45 Saravana_ joined #gluster
09:46 arcolife joined #gluster
09:49 mhulsman1 joined #gluster
09:54 Philambdo joined #gluster
09:56 harish_ joined #gluster
09:59 RayTrace_ joined #gluster
10:01 haomaiwa_ joined #gluster
10:03 ppai joined #gluster
10:04 Raide joined #gluster
10:06 techmadmin joined #gluster
10:06 DRoBeR joined #gluster
10:09 GB21 joined #gluster
10:14 kbyrne joined #gluster
10:14 Raide joined #gluster
10:16 RayTrace_ joined #gluster
10:24 Raide joined #gluster
10:27 kovshenin joined #gluster
10:30 Jules- joined #gluster
10:30 Raide joined #gluster
10:36 vmallika joined #gluster
10:38 Slashman joined #gluster
10:42 vmallika joined #gluster
10:46 dpetrov left #gluster
10:52 kovshenin joined #gluster
10:55 corretico joined #gluster
10:58 Raide joined #gluster
11:01 haomaiwa_ joined #gluster
11:02 Saravana_ joined #gluster
11:03 RayTrace_ joined #gluster
11:04 RayTrac__ joined #gluster
11:08 nbalacha joined #gluster
11:08 ppai joined #gluster
11:12 Saravana_ joined #gluster
11:12 bluenemo joined #gluster
11:16 Philambdo joined #gluster
11:20 Raide joined #gluster
11:20 kkeithley1 joined #gluster
11:20 mhulsman joined #gluster
11:24 techmadmin left #gluster
11:26 ppai joined #gluster
11:32 ira joined #gluster
11:37 Saravana_ joined #gluster
11:39 poornimag joined #gluster
11:40 davidself joined #gluster
11:41 Pupeno_ joined #gluster
11:43 rafi joined #gluster
11:48 RayTrace_ joined #gluster
11:51 mhulsman1 joined #gluster
11:51 Pupeno joined #gluster
12:09 marbu joined #gluster
12:09 skylar joined #gluster
12:10 kotreshhr left #gluster
12:10 nbalacha joined #gluster
12:14 ppai joined #gluster
12:16 poornimag joined #gluster
12:19 jtux joined #gluster
12:26 Saravana_ joined #gluster
12:32 RayTrac__ joined #gluster
12:34 rjoseph joined #gluster
12:34 DV__ joined #gluster
12:34 javi404 joined #gluster
12:35 unclemarc joined #gluster
12:36 GB21 joined #gluster
12:38 nishanth joined #gluster
12:39 ppai joined #gluster
12:39 mpietersen joined #gluster
12:40 spcmastertim joined #gluster
12:42 julim joined #gluster
12:42 rafi1 joined #gluster
12:52 shyam joined #gluster
12:54 DV__ joined #gluster
12:58 B21956 joined #gluster
13:00 bennyturns joined #gluster
13:03 haomaiwa_ joined #gluster
13:04 kdhananjay joined #gluster
13:06 firemanxbr joined #gluster
13:06 RayTrace_ joined #gluster
13:07 pkoro joined #gluster
13:18 hgowtham joined #gluster
13:19 harold joined #gluster
13:20 maveric_amitc_ joined #gluster
13:25 skylar joined #gluster
13:32 mbukatov joined #gluster
13:35 jwaibel joined #gluster
13:39 dusmant joined #gluster
13:41 aravindavk joined #gluster
13:41 dgandhi every time I look at this channel dpetrov is trashing ('s karma (++
13:41 glusterbot dgandhi: ('s karma is now -113
13:41 kotreshhr joined #gluster
13:44 maveric_amitc_ joined #gluster
13:45 JM_ joined #gluster
13:46 unicky Poor (
13:51 hagarth joined #gluster
13:53 Manikandan joined #gluster
13:54 shyam joined #gluster
13:54 jwd joined #gluster
13:55 JM_ left #gluster
13:57 theron joined #gluster
13:58 squizzi joined #gluster
14:00 hagarth joined #gluster
14:01 haomaiwa_ joined #gluster
14:10 mbukatov joined #gluster
14:11 jdossey joined #gluster
14:11 beeradb joined #gluster
14:13 ayma joined #gluster
14:20 coredump joined #gluster
14:20 Raide joined #gluster
14:21 jonb joined #gluster
14:23 ayma joined #gluster
14:32 jonb Hello, can anyone tell me if it is possible to upgrade from the community version of Gluster to the RedHat version of Gluster while preserving the data in the volume?
14:36 ildefonso jonb, I really can't see a reason why not.
14:39 maserati joined #gluster
14:40 kotreshhr joined #gluster
14:42 togdon joined #gluster
14:44 jonb Thanks, that is what I thought; it's just that looking through RedHat's documentation they don't have that upgrade path explicitly called out, so it was cause for concern.
14:44 bowhunter joined #gluster
14:47 armyriad joined #gluster
14:51 togdon_ joined #gluster
14:54 spcmastertim joined #gluster
14:55 haomaiwa_ joined #gluster
14:57 theron joined #gluster
15:01 haomaiwa_ joined #gluster
15:04 dusmant joined #gluster
15:06 JoeJulian @later tell dpetrov "gluster peer status" only shows the *other* peers. The gluster CLI only works on a server that's part of the trusted peer group. Think of "server" and "client" as separate objects that can (not must) exist on the same computer.
15:06 glusterbot JoeJulian: The operation succeeded.
15:06 ccoffey joined #gluster
15:07 cholcombe joined #gluster
15:08 dgbaley Ah, that's a cool trick.
15:08 kayn joined #gluster
15:08 dgbaley Although he was having a legit issue
15:09 JoeJulian maybe
15:09 JoeJulian I started to wonder if he just had glusterd running on a machine that wasn't part of the peer group.
15:09 ccoffey are there any strategies for not having glustershd DOS the system when re-adding a downed node? I gave it a go earlier and load avg went to 80 from a base of 0.3. I had background self heal count set to 1 and tried to ionice the glustershd process, but it didn't have much effect.
15:10 JoeJulian ccoffey: If that's denying service, you've got other problems.
15:10 JoeJulian ccoffey: what version?
15:12 ccoffey @joeJulian: 3.6.2-2
15:12 shyam joined #gluster
15:12 theron_ joined #gluster
15:13 JoeJulian Ah, ok. There was a bug. Upgrade.
15:13 * JoeJulian sheepishly retracts his pre-coffee snark.
15:14 ccoffey @JoeJulian, ah, that's promising. I must read the change log
15:15 ccoffey @JoeJulian, do you know off hand at what version the bug was fixed?
15:16 JoeJulian I thought it was 3.6.4->3.6.5, but it might have been in 3.6.4.
15:19 ccoffey @JoeJulian: Just dbl checking, I see I have operating-version=30600, which may be wrong. I'll look at going to the latest 3.6 and try again next week. Thanks for the reassurance anyway
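Note: for anyone searching later, the two throttles ccoffey mentions are set like this (volume name is a placeholder); on 3.6.2 the real fix discussed above was upgrading past the bug:
    gluster volume set myvol cluster.background-self-heal-count 1
    ionice -c 3 -p "$(pgrep -f glustershd)"
background-self-heal-count mainly throttles heals kicked off by client lookups, while glustershd runs its own index crawl, so the two knobs don't cover the same traffic.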
15:24 dgbaley Can I get a critique of my setup? I have 3 nodes with 9 disks each. I chose to do a brick/disk (no raid). So I have 36 bricks * 5 volumes = 180 glusterfsds. I'm starting to see more and more uses for new volumes and feel this is getting out of hand. Do people tend to RAID their disks first?
15:25 dgbaley Further, I use triple replication. So I thought it might make more sense to drop that to 2 if I used raid6 per host (or zfs raid6 perhaps).
15:26 CyrilPeponnet @JoeJulian: Gluster episode of the day: looks like it got some rest because load average and cpu usage dropped http://i.imgur.com/kFMO6lg.png I really don't know why but this is a fact
15:26 dgbaley Since the topology of each volume is the same, I could see switching to a giant unified hierarchy in a single volume, but I'm hesitant because of the security implications and the apparent inability to mount sub-directories.
15:28 CyrilPeponnet @JoeJulian I may have a hint. By setting too high a cache-read, basically no clients were able to use the client-side cache any more so all requests went to gluster. This could explain the spike the day I set the cache-read to 8GB thinking it was server side. Yesterday I set it to 400MB with max file size 5MB and timeout to 30s. Maybe that helped during the night.
15:30 JoeJulian dgbaley: Depends on use case, as always. One thing I did at $oldjob was use lvm partitions on each disk, where the lv was a brick for a specific volume. Made it so I could resize volumes on the fly without disturbing the existing storage.
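Note: a rough sketch of that LVM-per-disk layout (all names hypothetical): one PV/VG per disk and one LV per volume that needs a brick on that disk, so each brick can later be grown independently with lvextend:
    pvcreate /dev/sdb
    vgcreate gluster_sdb /dev/sdb
    lvcreate -L 500G -n brick_vol1 gluster_sdb
    mkfs.xfs -i size=512 /dev/gluster_sdb/brick_vol1
    mkdir -p /bricks/vol1_sdb
    mount /dev/gluster_sdb/brick_vol1 /bricks/vol1_sdb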
15:30 CyrilPeponnet (http://i.imgur.com/OmHzI8F.png for the CPU usage this night)
15:30 JoeJulian CyrilPeponnet: that makes a lot of sense.
15:31 CyrilPeponnet yep
15:31 CyrilPeponnet Feeling better right now :)
15:31 JoeJulian dgbaley: The only issue at that point was resizing caches smaller so all the many servers would fit in my limited memory footprint.
15:32 JoeJulian CyrilPeponnet: You have the makings of a really good blog post.
15:32 CyrilPeponnet \o/
15:33 CyrilPeponnet please help yourself :) it's been a long time since you've written a gluster related blog post
15:33 JoeJulian Yeah, I've been having trouble finding the time in a day to do anything for myself.
15:34 CyrilPeponnet @JoeJulian but something is weird, because maybe 2h after setting the cache-size to 8GB I reset it after seeing the cpu spike. The reset didn't change anything, and I think that yesterday, by forcing a more reasonable value, it finally got accepted and all clients started caching again
15:34 CyrilPeponnet @JoeJulian Same for me
15:37 atalur joined #gluster
15:37 CyrilPeponnet load average: 1.96, 3.46, 4.19 is wayyyyy better than load average: 41.97, 29.01, 26.25
15:40 stickyboy joined #gluster
15:43 theron joined #gluster
16:01 7GHABDBJI joined #gluster
16:01 CyrilPeponnet @JoeJulian you mentioned one day that fb, I think, is using glusterfs, and that they were preloading their cache before putting new servers in production
16:02 fubada joined #gluster
16:02 CyrilPeponnet Do you have more information about it ? like what is the IO profile they have, their setup, the options... I'd like to compare this with our infra here
16:02 dlambrig_ joined #gluster
16:03 fubada Hi. Can someone suggest how I could "ingest" a volume from one cluster into another separate cluster without joining the two into a single
16:03 fubada something that will keep in sync until im ready to terminate the source cluster
16:03 CyrilPeponnet geo-rep ?
16:04 fubada thanks CyrilPeponnet, i was reading into geo rep and got lost in the docs, do you have something straightforward
16:04 CyrilPeponnet https://gluster.readthedocs.org/en/latest/Administrator%20Guide/Geo%20Replication/
16:04 glusterbot Title: Geo Replication - Gluster Docs (at gluster.readthedocs.org)
16:04 fubada thank you
16:04 CyrilPeponnet quite easy in fact
16:06 CyrilPeponnet https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.0/html/Administration_Guide/chap-User_Guide-Geo_Rep-Starting-Configure.html
16:06 glusterbot Title: 11.3.4. Configuring Geo-replication (at access.redhat.com)
16:07 fubada thanks again ;)
16:07 kanagaraj joined #gluster
16:07 CyrilPeponnet sorry wrong link
16:07 fubada which is preferred?
16:07 CyrilPeponnet this one
16:07 CyrilPeponnet https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/chap-Managing_Geo-replication.html
16:07 glusterbot Title: Chapter 12. Managing Geo-replication (at access.redhat.com)
16:08 JoeJulian CyrilPeponnet: Actually, fb's use and their preloading caches are separate things. The way they use storage for content is much more complicated than using a filesystem.
16:09 CyrilPeponnet don't pay attention to mountbroker @fubada You can start geo-rep with 4 or 5 cmds
16:09 CyrilPeponnet @JoeJulian is there some white paper about it ?
16:10 fubada CyrilPeponnet: when setting up passwordless ssh logins for georep, are you using a dedicated user or root?
16:10 CyrilPeponnet root
16:10 CyrilPeponnet :p
16:11 JoeJulian The way they use gluster is as a service for other "customers" within FB to consume. They do it via nfs for various reasons. I've been to two different presentations, one at their office here in Seattle, the other by Richard Wareing at the gluster summit in Barcelona. Unfortunately, there's no link to his slides on the summit page so he must not have made them available.
16:12 CyrilPeponnet ok nevermind :)
16:13 akay joined #gluster
16:15 maveric_amitc_ joined #gluster
16:23 fubada CyrilPeponnet: is it really just the 12.3.4. section of https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/sect-Preparing_to_Deploy_Geo-replication.html
16:23 glusterbot Title: 12.3. Preparing to Deploy Geo-replication (at access.redhat.com)
16:23 fubada ?
16:24 fubada er 12.3.4.1.
16:25 CyrilPeponnet fubada yep
16:26 fubada thanks
16:26 CyrilPeponnet ssh-copy-id / ntp then gluster system:: execute gsec_create then gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem and gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
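Note: spelled out, that sequence looks like the following, with MASTER_VOL, SLAVE_HOST and SLAVE_VOL as placeholders and root ssh assumed as in this discussion:
    ssh-copy-id root@SLAVE_HOST        # plus NTP in sync on all nodes
    gluster system:: execute gsec_create
    gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL create push-pem
    gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL start
    gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL status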
16:28 Ray_Tracer joined #gluster
16:29 CyrilPeponnet later on you can promote the slave https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/sect-Disaster_Recovery.html#Promoting_a_Slave_to_Master
16:29 glusterbot Title: 12.6. Disaster Recovery (at access.redhat.com)
16:31 ivan_rossi joined #gluster
16:34 ivan_rossi Hello #gluster. A client of mine is experiencing random crashes of glusterd after upgrading from 3.6.x to 3.7.4 on CentOS 6.7
16:35 ivan_rossi the problem looks like a memory issue:
16:35 ivan_rossi Crash Dump From One of the Incident:
16:35 ivan_rossi ==============================
16:35 ivan_rossi patchset: git://git.gluster.com/glusterfs.git
16:35 ivan_rossi signal received: 6
16:35 ivan_rossi time of crash:
16:36 ivan_rossi joined #gluster
16:37 JoeJulian That's not the complete crash (and please use a pastebin like fpaste.org for more than 3 lines)
16:38 ivan_rossi sorryy julian. inexperienced.
16:39 JoeJulian No worries.
16:40 Leildin is : "[2015-10-08 16:31:11.708043] W [MSGID: 106217] [glusterd-op-sm.c:4548:glusterd_op_modify_op_ctx] 0-management: Failed uuid to hostname conversion"
16:40 Leildin anything to be worried about ?
16:40 Leildin it gets followed by [2015-10-08 16:31:11.708082] W [MSGID: 106387] [glusterd-op-sm.c:4644:glusterd_op_modify_op_ctx] 0-management: op_ctx modification failed
16:40 Leildin over and over
16:41 JoeJulian Leildin: It looks like one of your peers hostnames isn't resolving from that server.
16:41 fubada CyrilPeponnet: will using non-root for georep complicate my efforts or is it the same?
16:42 Leildin single peer setup :D
16:42 ivan_rossi let's retry: a client of mine is experiencing random crashes of glusterd after upgrading from 3.6.x to 3.7.4 on CentOS 6.7; crash dump at http://ur1.ca/nyl3p. Is it a known issue, are we doing something stupid, or something else?
16:42 glusterbot Title: ur1 Generator (at ur1.ca)
16:42 CyrilPeponnet I don't know I've doing this as root
16:42 CyrilPeponnet never tried with unprivileged user
16:42 hagarth ivan_rossi: could you send across more details about volume configuration & log files on gluster-users?
16:43 JoeJulian ivan_rossi: Odd. There are no abort() calls in the gluster source. There is in contrib/qemu but that's not part of glusterd, so I'm at a loss.
16:43 hagarth JoeJulian: the abort() seems to be triggered from within libc
16:43 rafi joined #gluster
16:45 hagarth ivan_rossi, JoeJulian: does seem like a double free of iobref upon a network disconnection
16:47 dlambrig_ joined #gluster
16:54 CyrilPeponnet @JoeJulian why is the cluster.read-hash-mode default value not set to 2 when there are *at least* two replicas ?
16:55 CyrilPeponnet because it means that with a replica all io operations are done on the same brick
16:55 CyrilPeponnet (by default)
16:57 JoeJulian CyrilPeponnet: Not sure. That was the prior behavior before that feature was added, so perhaps it's to avoid changing the behavior without an admin knowing about it. Perhaps it overrides the client reading from the local server if the client is on the server.
16:57 CyrilPeponnet Ok
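Note: the option under discussion is an ordinary volume setting; a sketch with a placeholder volume name (valid values and the default vary by release, so check "gluster volume set help" on your version):
    gluster volume set myvol cluster.read-hash-mode 2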
16:57 jiffin joined #gluster
17:00 tsaavik left #gluster
17:01 haomaiwa_ joined #gluster
17:05 frozengeek joined #gluster
17:05 ivan_rossi @hagarth: some stuff is already on gluster-users. look for mail from muhammad.aliabbas yesterday. the volume info was missing; I added it now: http://ur1.ca/nylad
17:06 JoeJulian ivan_rossi: Also file a bug report with the crash log
17:06 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
17:16 hagarth ivan_rossi: just responded requesting for the log file of the process that crashed
17:18 dlambrig_ joined #gluster
17:21 DV joined #gluster
17:30 cliluw joined #gluster
17:31 gem joined #gluster
17:35 frozengeek joined #gluster
17:56 fubada CyrilPeponnet: I wasnt able to do the metadata brick step
17:56 fubada Must be my gluster version (3.6.1)
17:56 fubada Invalid option
17:56 fubada geo-replication command failed
17:57 CyrilPeponnet don't need it I think
17:57 CyrilPeponnet never play with this metadata thingy
17:57 fubada and I was able to start georep but status is "faulty"
17:57 CyrilPeponnet check the logs
17:57 CyrilPeponnet the reason of the fault will be there
18:00 fubada CyrilPeponnet: heres where i am now https://gist.github.com/aamerik/af0cbc3b883c3f1276f3
18:00 glusterbot Title: gist:af0cbc3b883c3f1276f3 · GitHub (at gist.github.com)
18:00 CyrilPeponnet check the logs
18:03 fubada CyrilPeponnet: im getting ssh perm denied in the logs, but I can ssh without a password just fine
18:03 fubada as root
18:04 fubada Im confused why its trying to use /var/lib/glusterd/geo-replication/secret.pem for ssh and not the generated ssh key in /root/.ssh
18:06 CyrilPeponnet AFAICR the gsec thingy will deploy the key everywhere.
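Note: one way to check what geo-rep itself sees is to try the same key by hand (slave hostname is a placeholder):
    ssh -i /var/lib/glusterd/geo-replication/secret.pem root@SLAVE_HOST
If that prompts for a password, the keys distributed by gsec_create/push-pem never reached the slave's authorized_keys; if it connects, expect it to run gsyncd rather than a shell because of the command= restriction push-pem installs.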
18:07 togdon joined #gluster
18:07 CyrilPeponnet but the last time I tried I hit a bug but this was with 3.5
18:07 fubada weird I ran all those commands and the only issue was the metadata one
18:07 CyrilPeponnet I'll need to setup geo-rep here
18:07 CyrilPeponnet so I will try
18:07 fubada thank you!
18:07 CyrilPeponnet Finishing some tasks before
18:07 CyrilPeponnet I'll then report back
18:11 fubada CyrilPeponnet: do i need the ssh key on every master host?
18:11 fubada or just one of the hosts in my source cluster
18:11 semiautomatic joined #gluster
18:11 CyrilPeponnet it should replicate it self
18:11 CyrilPeponnet with the gsec cmds
18:11 fubada I mean ssh key
18:20 fubada CyrilPeponnet: how can I start georepl over
18:20 fubada its in a weird state, I cant issue any command
18:20 fubada Another transaction is in progress for puppetca. Please try again after sometime.
18:20 fubada geo-replication command failed
18:35 togdon joined #gluster
18:35 royadav joined #gluster
18:37 royadav All: I set up the gluster volume with 2 replicas on two servers but replication is not working
18:37 royadav All: How to debug this?
18:37 CyrilPeponnet gluster vol info yourvol
18:37 CyrilPeponnet why do you say it's not working
18:38 royadav CyrilPeponnet: Gluster volume info shows status: STarted
18:38 fubada CyrilPeponnet: i just repeated the process, now Status is Passive from one master and Faulty from the other
18:38 fubada how can I check wtf
18:38 CyrilPeponnet @royadav output
18:38 CyrilPeponnet @fubada works for me here just setup geo-rep but as root
18:38 royadav CyrilPeponnet: output?
18:39 royadav CyrilPeponnet: sorry, I am new to this so dunno much
18:39 CyrilPeponnet @gluster vol info vol
18:39 royadav CyrilPeponnet: yes, it creates volume and shows type replicate with right brick address and transport type tcp
18:40 CyrilPeponnet why do you say it's not working ?
18:40 royadav CyrilPeponnet: status is ready and performance.readdir-ahead: on
18:40 royadav CyrilPeponnet: when I create a file on one server in the brick.. it should replicate?
18:40 bwellsnc joined #gluster
18:40 royadav CyrilPeponnet: but it does not
18:40 CyrilPeponnet NO
18:40 CyrilPeponnet never touch the brick
18:40 CyrilPeponnet create file on mount point
18:41 royadav CyrilPeponnet: I see..
18:41 royadav CyrilPeponnet: thanks!
18:41 bwellsnc Hey guys, I have an interesting question.  I am wanting to use gluster to mount one iscsi block device to multiple servers
18:41 fubada CyrilPeponnet: in logs, bash: /nonexistent/gsyncd: No such file or directory
18:41 fubada ?
18:41 bwellsnc Is that possible
18:41 royadav CyrilPeponnet: but one more problem: when I try to mount the brick it complains that the glusterfs filesystem is not found
18:41 royadav CyrilPeponnet: how do I resolve that?
18:41 CyrilPeponnet @fubada yeah I've seen something like this, try to google it there is things in the mailing list
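Note: the /nonexistent/gsyncd error usually means the session doesn't know where gsyncd lives on the slave. The fix that circulated on the mailing lists at the time was pointing the session at the slave-side gsyncd binary via the geo-rep config command, roughly like the following (the path and the exact key name should be checked against your version's gsyncd.conf, so treat this as a sketch):
    gluster volume geo-replication MASTER_VOL SLAVE_HOST::SLAVE_VOL config remote-gsyncd /usr/libexec/glusterfs/gsyncd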
18:41 CyrilPeponnet install glusterfs-fuse
18:42 CyrilPeponnet or mount using nfs
18:42 royadav CyrilPeponnet: I am actually on NixOS, so installed it from source.. is there a separate glusterfs-fuse module?
18:42 royadav CyrilPeponnet: or does it come with the main tar?
18:43 CyrilPeponnet don't know
18:43 CyrilPeponnet but this is a separate package
18:43 CyrilPeponnet not a kernel module
18:43 CyrilPeponnet (even if it depends on fuse)
18:43 CyrilPeponnet use nfs then
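Note: both mount styles for reference, assuming a volume named gv0 served from server1; the fuse mount needs the glusterfs client/fuse package on the client, while the nfs mount only needs a plain NFS client since gluster's built-in NFS server speaks v3:
    mount -t glusterfs server1:/gv0 /mnt/gv0
    mount -t nfs -o vers=3,mountproto=tcp server1:/gv0 /mnt/gv0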
18:44 royadav CyrilPeponnet: Is there a performance hit for using N
18:44 royadav CyrilPeponnet: NFS*
18:44 CyrilPeponnet it depends on your IO profile
18:44 CyrilPeponnet (usage)
18:45 royadav CyrilPeponnet: I see.. thanks! let me try that and see if it works..
18:45 royadav CyrilPeponnet: thanks for the help
18:45 royadav CyrilPeponnet: I was going bananas
18:45 royadav CyrilPeponnet: seeing that its not working
18:46 royadav CyrilPeponnet: I guess that was the reason that you should not touch the original brick volume
18:46 royadav CyrilPeponnet: rather mount it and create files
18:46 CyrilPeponnet yep
18:46 royadav CyrilPeponnet: thanks!!
18:46 CyrilPeponnet the only time you'll need to touch a brick is to resolve a split-brain situation (and I think you can now use commands to do that instead of digging into the brick)
18:47 royadav CyrilPeponnet: How do you debug the issues?
18:47 royadav CyrilPeponnet: I mean when you do something wrong.. what are the debug options other than cryptic logs (at least for a novice like me)
18:48 CyrilPeponnet logs
18:48 CyrilPeponnet :)
18:48 CyrilPeponnet @JoeJulian any skills in geo-rep ?
18:48 royadav CyrilPeponnet: That is the only options? I see multiple logs which is for which?
18:49 CyrilPeponnet read the doc :)
18:49 CyrilPeponnet there is cli log, brick log, volume logs, mount logs, geo-rep logs....
18:50 royadav CyrilPeponnet: Cool.. I will do that.. is there a nice gui feature where you can manage the volumes or is everything still command line?
18:51 royadav CyrilPeponnet: though I prefer command line but sometime like to have that for monitoring
18:52 ParsectiX joined #gluster
18:53 fubada CyrilPeponnet: Im looking at the logs from the one failing master, https://gist.github.com/aamerik/a3df1a4b54109deb405a
18:53 glusterbot Title: gist:a3df1a4b54109deb405a · GitHub (at gist.github.com)
18:53 fubada Do you have a sec to help
18:53 ParsectiX joined #gluster
18:54 ParsectiX joined #gluster
18:58 dlambrig_ joined #gluster
19:01 fubada I dont understand why master1 needs to ssh using the PEM and master2 doesnt try to ssh at all when starting geo repl
19:05 CyrilPeponnet what version are you using
19:05 CyrilPeponnet 127.0.0.1:24007 failed (Connection refused)
19:06 fubada 2.6.1
19:06 CyrilPeponnet do you have a firewall
19:06 fubada 3.6.1
19:06 fubada nope, iptables blank
19:07 fubada so, the current status is, one master is Passive, the other is Faulty.  The one that's Faulty is where I ran all the setup commands
19:07 fubada strange
19:07 CyrilPeponnet https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/sect-Troubleshooting_Geo-replication.html
19:07 glusterbot Title: 12.10. Troubleshooting Geo-replication (at access.redhat.com)
19:08 CyrilPeponnet no clue for now
19:13 ParsectiX joined #gluster
19:17 ParsectiX joined #gluster
19:19 ParsectiX joined #gluster
19:19 fubada CyrilPeponnet: general question, do the slaves accept writes?
19:20 shaunm joined #gluster
19:22 ParsectiX joined #gluster
19:22 kovshenin joined #gluster
19:25 fubada CyrilPeponnet: whats the procedure for adding other volumes to geo rep? I'm trying to repeat the steps (changing the volume name) but they are failing
19:26 fubada working :)
19:27 semiautomatic1 joined #gluster
19:39 dlambrig_ left #gluster
19:40 jwd joined #gluster
19:42 jwaibel joined #gluster
19:43 kovshenin joined #gluster
19:50 dlambrig_ joined #gluster
19:52 kovshenin joined #gluster
19:52 bwellsnc Hey guys, not sure if I can do this with gluster.  I have an iscsi lun on my NetApp that I want to share with multiple servers.  I know I cannot mount iscsi on multiple servers at the same time.  Can I mount the iscsi lun and use gluster to share it with multiple servers that way?  Thanks!
19:57 theron_ joined #gluster
19:57 semiautomatic joined #gluster
19:58 cristian joined #gluster
20:00 ParsectiX joined #gluster
20:01 swebb joined #gluster
20:02 ParsectiX joined #gluster
20:04 rowhit joined #gluster
20:05 rowhit CyrilPeponnet: It worked like a charm.. it replicates across the servers if files are created from the mount point...
20:09 bwells_ joined #gluster
20:13 ayma joined #gluster
20:14 kovshenin joined #gluster
20:17 CyrilPeponnet any geo-rep guru around ?
20:18 CyrilPeponnet Setting up geo-rep between two volumes, it starts as change-log directly instead of hybrid crawl (at least it was like this in 3.5, maybe it changed). Is that the default behavior ?
20:18 CyrilPeponnet New files are geo-reped but not existing ones
20:23 togdon joined #gluster
20:26 fubada CyrilPeponnet: one of my volumes, I cannot get one master to get a successful status
20:26 fubada it says Config Corrupted
20:27 fubada any clue what I can do to fix it? It did not appear to generate a gsyncd.conf
20:27 fubada gotta say geo-repl is very unpredictable in 3.6.1
20:32 CyrilPeponnet No clue, but it works fine in 3.6.5
20:40 Ru57y joined #gluster
20:48 bluenemo joined #gluster
20:55 CyrilPeponnet @JoeJulian around ?
20:55 bennyturns joined #gluster
21:15 CyrilPeponnet anyway I had to reset the trusted.glusterfs.xx.xx.stime attr in order to retrigger a crawl....
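Note: the stime marker is an extended attribute on each master brick root, named after the master and slave volume UUIDs (left as placeholders here, as above, since they are session-specific). Removing it with the session stopped makes the next start crawl again; a sketch only:
    setfattr -x trusted.glusterfs.<MASTER_UUID>.<SLAVE_UUID>.stime /path/to/brick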
21:15 adama joined #gluster
21:16 adamaN joined #gluster
21:18 adamaN Hi, i am getting an error when i try to mount gluster:
21:18 adamaN root@g-7:~# mount -a
21:18 adamaN unknown option _netdev (ignored)
21:18 adamaN why is that?
21:18 side_control joined #gluster
21:20 CyrilPeponnet adamaN version ?
21:20 CyrilPeponnet the _netdev option is a directive in fstab that makes the mount happen after the network device is up and configured
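Note: a typical fstab line carrying that option, with server and volume names as placeholders:
    server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev  0 0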
21:20 CyrilPeponnet works fine here on centos
21:22 adamaN glusterfs 3.3.1
21:22 CyrilPeponnet oh please update :)
21:22 CyrilPeponnet which distort ?
21:22 CyrilPeponnet distro
21:22 adamaN glusterfs 3.3.1 built on Apr  2 2013 15:09:48
21:22 adamaN Repository revision: git://git.gluster.com/glusterfs.git
21:22 adamaN Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
21:23 glusterbot Title: Technologies | Red Hat (at www.gluster.com)
21:23 CyrilPeponnet On which OS
21:23 CyrilPeponnet (you are using a ~5 year old release of gluster should really upgrade it)
21:23 papamoose joined #gluster
21:24 adamaN working on it. this problem appeared today. I did not change a thing.
21:24 CyrilPeponnet did you update your system
21:25 CyrilPeponnet _netdev is part of fstab
21:25 adamaN no update
21:26 CyrilPeponnet it's a warning that you can ignore so..
21:27 adamaN it even hangs when i try df -h
21:27 togdon joined #gluster
21:29 adamaN also : mountall: Plymouth command failed
21:40 stickyboy joined #gluster
22:02 bennyturns joined #gluster
22:03 Guest67407 joined #gluster
22:06 skylar joined #gluster
22:07 DV joined #gluster
22:13 mjrosenb joined #gluster
22:14 plarsen joined #gluster
22:28 badone joined #gluster
22:50 skylar joined #gluster
23:54 badone joined #gluster
23:57 EinstCrazy joined #gluster
