IRC log for #gluster, 2015-12-18


All times are shown in UTC.

Time Nick Message
00:03 hgichon joined #gluster
00:08 bowhunter joined #gluster
00:16 gildub joined #gluster
00:32 zhangjn joined #gluster
00:42 dlambrig joined #gluster
00:55 DV joined #gluster
00:59 calavera joined #gluster
01:02 EinstCrazy joined #gluster
01:18 zhangjn joined #gluster
01:29 Lee1092 joined #gluster
01:56 haomaiwang joined #gluster
02:03 haomaiwa_ joined #gluster
02:18 DV joined #gluster
02:24 ahino joined #gluster
02:30 harish_ joined #gluster
02:32 nangthang joined #gluster
02:42 kotreshhr joined #gluster
02:45 kotreshhr left #gluster
02:47 ilbot3 joined #gluster
02:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:55 zhangjn joined #gluster
02:57 overclk joined #gluster
03:00 mlncn joined #gluster
03:01 haomaiwa_ joined #gluster
03:18 DV__ joined #gluster
03:35 atinm joined #gluster
03:40 kanagaraj joined #gluster
03:41 Peppaq joined #gluster
03:42 ahino joined #gluster
03:47 kotreshhr joined #gluster
03:54 shubhendu joined #gluster
03:56 ahino joined #gluster
04:00 RameshN joined #gluster
04:01 haomaiwa_ joined #gluster
04:02 atinm joined #gluster
04:12 atinm joined #gluster
04:13 hagarth joined #gluster
04:14 ju5t joined #gluster
04:15 itisravi joined #gluster
04:16 kotreshhr joined #gluster
04:21 lpabon joined #gluster
04:25 nehar joined #gluster
04:26 nbalacha joined #gluster
04:28 zhangjn joined #gluster
04:29 danieli joined #gluster
04:31 Park joined #gluster
04:36 Humble joined #gluster
04:37 RameshN joined #gluster
04:37 kdhananjay joined #gluster
04:45 kshlm joined #gluster
04:46 amye joined #gluster
04:50 poornimag joined #gluster
04:57 jiffin joined #gluster
04:58 ndarshan joined #gluster
04:59 pppp joined #gluster
04:59 arcolife joined #gluster
05:01 haomaiwa_ joined #gluster
05:06 vmallika joined #gluster
05:16 Manikandan joined #gluster
05:18 kotreshhr joined #gluster
05:20 calavera joined #gluster
05:31 deepakcs joined #gluster
05:31 Apeksha joined #gluster
05:33 aravindavk joined #gluster
05:34 overclk joined #gluster
05:34 kshlm joined #gluster
05:36 ppai joined #gluster
05:49 atalur joined #gluster
05:51 Manikandan joined #gluster
05:53 atalur joined #gluster
05:54 nishanth joined #gluster
05:56 21WAAJQB3 joined #gluster
05:56 hgowtham joined #gluster
06:01 haomaiwa_ joined #gluster
06:02 skoduri joined #gluster
06:02 ashiq joined #gluster
06:02 zeittunnel joined #gluster
06:06 pppp joined #gluster
06:08 ramky joined #gluster
06:09 klaxa joined #gluster
06:09 Bhaskarakiran joined #gluster
06:15 amye joined #gluster
06:16 calavera joined #gluster
06:17 rafi joined #gluster
06:17 hgowtham joined #gluster
06:21 Manikandan joined #gluster
06:23 dusmant joined #gluster
06:25 shubhendu joined #gluster
06:25 nishanth joined #gluster
06:30 spalai joined #gluster
06:30 spalai left #gluster
06:31 spalai joined #gluster
06:35 julim joined #gluster
06:36 haomaiwang joined #gluster
06:42 DV joined #gluster
06:44 amye joined #gluster
06:45 nbalacha joined #gluster
06:51 shubhendu joined #gluster
06:51 nishanth joined #gluster
06:52 anil joined #gluster
06:53 kaarebs joined #gluster
06:53 ChrisNBlum joined #gluster
07:01 haomaiwa_ joined #gluster
07:08 julim joined #gluster
07:11 Apeksha joined #gluster
07:11 Bhaskarakiran joined #gluster
07:17 jtux joined #gluster
07:18 Manikandan joined #gluster
07:19 mhulsman joined #gluster
07:20 ctria joined #gluster
07:26 gildub joined #gluster
07:37 ppai joined #gluster
07:38 uebera|| joined #gluster
07:38 uebera|| joined #gluster
07:43 mobaer joined #gluster
08:01 haomaiwa_ joined #gluster
08:05 bhuddah joined #gluster
08:05 ju5t joined #gluster
08:05 ppai joined #gluster
08:12 [Enrico] joined #gluster
08:13 amye joined #gluster
08:17 ivan_rossi joined #gluster
08:29 hagarth joined #gluster
08:31 hagarth joined #gluster
08:34 SOLDIERz joined #gluster
08:34 kaarebs joined #gluster
08:37 ppai joined #gluster
08:39 fsimonce joined #gluster
08:40 uebera|| joined #gluster
08:40 uebera|| joined #gluster
08:48 spalai joined #gluster
08:50 ashiq joined #gluster
08:58 * rjoseph in GlusterFS meetup
09:01 kotreshhr joined #gluster
09:01 haomaiwa_ joined #gluster
09:01 alghost_ joined #gluster
09:03 ctria joined #gluster
09:05 rafi joined #gluster
09:05 Bhaskarakiran_ joined #gluster
09:07 poornimag joined #gluster
09:07 atalur joined #gluster
09:09 skoduri joined #gluster
09:09 aravindavk joined #gluster
09:10 atinm joined #gluster
09:10 jiffin joined #gluster
09:13 rafi joined #gluster
09:13 rastar joined #gluster
09:14 Manikandan joined #gluster
09:17 anil joined #gluster
09:26 amye joined #gluster
09:27 Humble joined #gluster
09:31 Bardack hum
09:31 Bardack since we migrated from gluster 3.5 to 3.7, and since we now use ganesha, we're seeing weird behaviour from Windows clients with NFS access
09:32 Bardack we can access the NFS share without an issue, but we cannot write to it anymore
09:32 Bardack well, it's more complex
09:32 Bardack we can create directories and empty files, but we are unable to write data to those files
09:32 Bardack has anybody had this issue?
09:33 RedW joined #gluster
09:33 ndevos Bardack: I've not heard of that before, I think we need a tcpdump to understand why that happens
09:33 jiffin Bardack: can you check ganesha.log and ganesha-gfapi.log in /var/log/ on the node the ganesha server is running on
09:33 Bardack https://www.gluster.org/pipermail/gluster-users/2015-May/022045.html
09:33 glusterbot Title: [Gluster-users] [Gluster-devel] glusterfs+nfs-ganesha+windows2008 issues (at www.gluster.org)
09:33 Bardack just found this, which might be related
09:35 ndevos Bardack: might be, do you get segfaults from ganesha too?
09:35 Bardack just trying the action on the client now and will check the log
09:35 ndevos Bardack: but definitely file a bug with all the details about the versions, configuration and how to reproduce
09:35 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
09:36 rafi joined #gluster
09:36 dusmant joined #gluster
09:36 ndevos Apeksha: ^ might be of interest for you too?
09:37 Bardack it's quite funny: creating a directory and naming it fails, saying it's not able to write, BUT it still creates the folder NewFolder(2)
09:37 Apeksha ndevos, looking
09:38 ndevos Bardack: I guess the Windows NFS-client makes multiple calls to create the dir, and some can (optionally) fail
09:39 ndevos Bardack: it would really help if you could attach a tcpdump captured on the NFS-server while you perform those steps
09:39 Bardack http://pastebin.com/2mEp7WjJ
09:39 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
09:39 ndevos Bardack: capture it like: tcpdump -s 0 -i any -w /var/tmp/win-nfs-client.pcap tcp and port 2049
09:40 ndevos Bardack: when it's written to a file, we can use Wireshark to inspect the contents of the NFS traffic
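
[A minimal sketch of the capture-and-inspect workflow ndevos describes, assuming the default NFS port 2049 and that tshark, Wireshark's command-line companion, is installed:

    # capture full NFS packets on all interfaces while reproducing the failure
    tcpdump -s 0 -i any -w /var/tmp/win-nfs-client.pcap tcp and port 2049
    # afterwards, list only the NFS operations in the capture
    tshark -r /var/tmp/win-nfs-client.pcap -Y nfs
]
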
09:40 Apeksha ndevos, I've not come across this issue
09:41 Bardack ok, I'll run a tcpdump right now
09:41 ndevos Apeksha: oh, ok, when Bardack has the details in a bug report, we can then compare versions and config and all
09:41 Apeksha ndevos, sure
09:41 ndevos thanks Apeksha!
09:41 Apeksha ndevos, :)
09:43 ndevos Bardack: I'm stepping out for a bit, will be back in ~2 hours or so
09:43 Bardack and I'll be gone :)
09:43 Bardack so I'll provide as much info as I can
09:43 ndevos Bardack: if jiffin or skoduri have time, they might be able to help a little too
09:44 ndevos Bardack: when you've filed the bug, post the link here so I'll find it easily :)
09:44 Bardack mkay
09:45 jiffin Bardack: JoeJulian has experience with Windows NFS clients
09:45 jiffin Bardack: Please file a bug for the issue, with logs, tcpdump, and steps to reproduce in the same
09:45 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
09:50 atinm joined #gluster
09:52 skoduri Bardack, along with the packet trace, please provide the entire nfs-ganesha.log and ganesha-gfapi.log files
09:53 skoduri Bardack, for now, could you grep for "/1900/190001/19000101/9" in the ganesha-gfapi.log file and paste the output?
09:53 Bardack ok i will
09:53 Bardack http://fpaste.org/302660/04324241/   please find the tcpdump taken while I create a directory
09:53 glusterbot Title: #302660 Fedora Project Pastebin (at fpaste.org)
09:54 nangthang joined #gluster
09:55 Bardack and the grep: http://fpaste.org/302661/04325271/
09:55 glusterbot Title: #302661 Fedora Project Pastebin (at fpaste.org)
09:57 arcolife joined #gluster
09:58 rafi joined #gluster
10:00 deniszh joined #gluster
10:01 hgowtham joined #gluster
10:01 haomaiwa_ joined #gluster
10:04 deniszh1 joined #gluster
10:13 Bardack skoduri , jiffin , ndevos : https://bugzilla.redhat.com/show_bug.cgi?id=1292771
10:13 glusterbot Bug 1292771: high, unspecified, ---, bugs, NEW , NFS Ganesha write access issues with Win7 Enterprise
10:15 Bardack it was a WORM-enabled volume, I disabled it, same issue
10:16 Bardack ohhh no
10:16 Bardack it works
10:16 DV joined #gluster
10:17 jwd joined #gluster
10:18 Bardack hold on, it could maybe just be related to permission changes ...
10:19 Bardack it seems to be related to the WORM volume
10:19 Bardack when it's enabled it doesn't work, when disabled it works
10:19 atinm joined #gluster
10:20 Bardack ok, so confirmed: it's WORM-related with ganesha
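
[For context, WORM (write-once-read-many) is a per-volume option; a minimal sketch of toggling it the way Bardack describes, assuming a hypothetical volume "myvol":

    gluster volume set myvol features.worm on     # existing files become read-only
    gluster volume set myvol features.worm off
]
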
10:35 harish_ joined #gluster
10:37 ju5t hi, we're doing a fix-layout on 4 bricks; 2 of them finished in a little over an hour, but the others have been running for over 20 hours now. Do you have any ideas on why that's happening?
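
[For reference, a fix-layout run is driven and monitored via the rebalance commands; a sketch with a hypothetical volume "myvol":

    gluster volume rebalance myvol fix-layout start
    gluster volume rebalance myvol status      # per-node progress; helps spot which nodes lag
]
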
10:41 spalai joined #gluster
10:41 EinstCrazy joined #gluster
10:42 Bhaskarakiran_ joined #gluster
10:51 dusmant joined #gluster
10:58 amye joined #gluster
11:01 haomaiwa_ joined #gluster
11:01 spalai joined #gluster
11:01 poornimag joined #gluster
11:02 mvatanen joined #gluster
11:04 skoduri joined #gluster
11:04 atinm joined #gluster
11:04 cyberbootje joined #gluster
11:05 mvatanen hi, any schedule for fixing the glusterfs repo files? the gpgkey path returns a 404: http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/CentOS/glusterfs-epel.repo
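
[A quick way to confirm what mvatanen reports, pulling the gpgkey URL out of the repo file and checking its HTTP status; a sketch, not an official diagnostic:

    gpgkey=$(curl -s http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/CentOS/glusterfs-epel.repo | sed -n 's/^gpgkey=//p')
    curl -sI "$gpgkey" | head -n 1      # "404 Not Found" confirms the broken key path
]
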
11:10 badone joined #gluster
11:19 zeittunnel joined #gluster
11:23 kotreshhr joined #gluster
11:25 aravindavk joined #gluster
11:25 kotreshhr1 joined #gluster
11:28 jiffin1 joined #gluster
11:30 skoduri Bardack ..saw your bug ..I have a guess at what might have happened ..though it's yet to be confirmed
11:30 skoduri Bardack, IIUC, you created a file/directory and are then trying to rename it to a different name?
11:33 jiffin skoduri: as far as I understand, when he enables WORM on the volume and tries to perform create/mkdir, it fails on Windows clients
11:34 jiffin but it works for Linux NFS clients
11:34 jiffin Bardack: can you confirm it?
11:34 skoduri jiffin, yes, it's with the WORM feature..
11:35 skoduri the way I see it, the test case is: a directory/file is created with a default name and then renamed to a different name
11:39 Manikandan joined #gluster
11:46 ivan_rossi left #gluster
11:59 ju5t joined #gluster
12:00 overclk joined #gluster
12:01 mvatanen left #gluster
12:05 DV joined #gluster
12:06 ndevos csim, kkeithley: can one of you fix the 3.5/LATEST location on download.gluster.org, see mvatanen's message earlier here
12:07 pppp joined #gluster
12:08 mlncn joined #gluster
12:16 firemanxbr joined #gluster
12:17 julim joined #gluster
12:18 rafi joined #gluster
12:31 anil joined #gluster
12:31 kanagaraj joined #gluster
12:37 Pingu joined #gluster
12:38 Pingu hi, I'm having a problem with gluster mounted as NFS in a replica environment: if I pull the plug on one of the servers, the client freezes. I've no idea where to start
12:39 ndevos Pingu: well, do you mount using a virtual IP-address that gets relocated when a server goes down?
12:39 bhuddah do you mount hard and nointr?
12:40 ndevos the freeze itself is OK, and is expected, but it should continue again after a while
12:40 Pingu i don't mount with any options, just proto=tcp
12:41 Pingu I've waited, it doesn't continue
12:41 Pingu how long should I wait?
12:41 ndevos depends a little on the configuration, it could take up to a minute or so
12:41 ndevos do you mount by ip-address, or do you give a hostname?
12:42 Pingu I've waited longer than that. Client was writing on the mounted directory and I've pulled the plug on one server - to test.
12:42 Pingu I mount by IP
12:43 ndevos ok, and when the server goes down, the IP is still available?
12:43 skoduri joined #gluster
12:43 Pingu frankly, I'd use fuse instead of nfs, but I get really poor performance with fuse (10MB/s)
12:43 Pingu what do you mean if the IP is available?
12:44 ndevos do you use something like pacemaker or ctdb to relocate/migrate the IP-address when the server goes down?
12:45 Pingu no, the IP is not realocated
12:45 ndevos ok, so if the NFS-client is talking to the Gluster server that goes down, it will not know about any other servers that it can try (NFS limitation)
12:46 ndevos you can use a virtual ip-address, that is assigned to one Gluster server, and that IP-address moves when the Gluster server goes down
12:46 ndevos all Gluster servers should have/keep their own IP-address, the virtual-ip is an additional one
12:47 ndevos it is what high-availability software like pacemaker and ctdb can do for you
12:47 Pingu and do I mount on the client virtual.ip:/brick
12:47 Pingu ?
12:48 ndevos yes, (virtual-ip:/volume) the NFS-client can then mount using the virtual-ip, which fails-over/migrates/relocates when needed
12:48 ndevos an NFS-client can only connect to one IP-address; the FUSE-client knows about all Gluster servers
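
[A minimal sketch of the NFS mount ndevos describes, with a hypothetical virtual IP 192.0.2.100 and volume "myvol" (Gluster's built-in NFS server speaks NFSv3):

    mount -t nfs -o vers=3,proto=tcp 192.0.2.100:/myvol /mnt/myvol
]
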
12:48 Pingu I see. any performance overhead?
12:49 ndevos depends on the number of clients; if you have many, you may want to use multiple virtual-IPs and set up round-robin DNS for them
12:49 Pingu I told you, I'd rather use fuse, but fuse = 10MB/s while nfs = 100MB/s
12:50 Pingu no, very few clients
12:50 Pingu 2, mostly 5
12:50 Pingu *at most*
12:50 ju5t it seems that /var/lib/glusterd/peers/ is overwritten every time a brick joins a cluster, where does it get this information from?
12:51 Humble joined #gluster
12:51 ndevos if you want to distribute the NFS-connections, you need to use more virtual-IPs, it really depends on your demands
12:52 Pingu OK, thanks ndevos. this has been very informative. Any ideas how to improve performance over fuse?
12:52 ndevos we offer a mostly automated way to set up pacemaker + NFS-Ganesha for high-availability NFS, see http://gluster.readthedocs.org/en/latest/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Intergration/ for details
12:52 glusterbot Title: Configuring NFS-Ganesha server - Gluster Docs (at gluster.readthedocs.org)
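
[The linked guide drives the pacemaker setup from /etc/ganesha/ganesha-ha.conf; a rough sketch of its shape in the 3.7 era, with hypothetical hostnames and addresses (field names may differ between versions):

    HA_NAME="ganesha-ha-cluster"
    HA_VOL_SERVER="server1"
    HA_CLUSTER_NODES="server1,server2"
    VIP_server1="192.0.2.101"
    VIP_server2="192.0.2.102"
]
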
12:52 DV__ joined #gluster
12:53 ju5t because for some odd reason we have one peer that's identified by an ip address and the others by their hostname
12:53 ndevos Pingu: that really depends on the type of workload, the volume and all - NFS is faster for small files, because it caches more than FUSE
12:53 ndevos ~hostnames | ju5t
12:53 glusterbot ju5t: Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
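
[A sketch of the re-probe glusterbot describes, with hypothetical peers where "server2" is the one currently listed by IP:

    # from any peer other than server2, probe it by name
    gluster peer probe server2
    gluster peer status      # server2 should now be identified by hostname
]
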
12:54 Pingu So I've read, but the difference is 10-fold. Is this normal?
12:54 atalur joined #gluster
12:54 ira joined #gluster
12:55 ju5t ndevos: thanks, didn't expect it to be that easy
12:56 ndevos ju5t: nice :)
12:57 mobaer joined #gluster
12:57 ndevos Pingu: it really depends, I'm pretty sure there are workloads where it is even more than 10x
12:58 ju5t if we get an 'unable to find friend' error when we're using hostnames, even though it can resolve the hostname to an IP, what could be causing that?
12:58 Pingu I'm just in the testing phase... dd if=/dev/zero of=file1 bs=1024 count=65536 got me about 10MB/sec, I was really expecting more
12:58 chirino joined #gluster
12:59 unclemarc joined #gluster
13:05 Pingu ndevos, you rock. Thanks a lot for the help!
13:08 ndevos Pingu: oh, well, dd is not really a good test, it will hardly compare to what you'll be doing in production
13:10 Pingu ndevos, I know, but it's the fastest way to test :)
13:11 ndevos heh, yeah, I understand
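
[One reason the numbers look low: dd with bs=1024 issues 65536 tiny writes, which magnifies per-request latency on a network filesystem. A sketch of a fairer streaming test (still synthetic, and still no substitute for the real workload):

    dd if=/dev/zero of=file1 bs=1M count=64 conv=fdatasync
]
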
13:25 firemanxbr joined #gluster
13:27 d0nn1e joined #gluster
13:39 atinm joined #gluster
13:41 ira joined #gluster
13:46 haomaiwang joined #gluster
13:52 spalai left #gluster
13:55 B21956 joined #gluster
14:01 haomaiwa_ joined #gluster
14:32 hamiller joined #gluster
14:32 foster joined #gluster
14:34 refj joined #gluster
14:35 atinm joined #gluster
14:45 drue joined #gluster
14:46 drue what is it called when someone does an rsync into the actual brick directory on a gluster node? (so i can properly google how to fix it?)
14:48 ju5t joined #gluster
14:52 drue i'm thinking just remove-brick and add-brick.
14:55 arcolife joined #gluster
15:01 haomaiwa_ joined #gluster
15:01 firemanxbr joined #gluster
15:04 hagarth joined #gluster
15:05 lpabon joined #gluster
15:21 kdhananjay joined #gluster
15:24 DV__ joined #gluster
15:35 plarsen joined #gluster
15:36 kaarebs joined #gluster
15:53 kotreshhr1 left #gluster
16:01 haomaiwa_ joined #gluster
16:04 ju5t joined #gluster
16:05 aravindavk joined #gluster
16:08 bennyturns joined #gluster
16:09 bennyturns joined #gluster
16:09 firemanxbr joined #gluster
16:11 DV joined #gluster
16:14 fyxim joined #gluster
16:26 JoeJulian drue: There's no specific name for it. The way I would fix it (with a replicated volume) is to wipe it and let it re-replicate.
16:28 JoeJulian remove-brick simply removes a brick or bricks from your volume. It doesn't remove any data from those bricks.
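
[A rough sketch of the wipe-and-reheal approach JoeJulian describes, with a hypothetical volume "myvol" and brick path; verify the other replica is healthy before touching anything:

    gluster volume status myvol                        # find the PID of the polluted brick
    kill <brick-pid>                                   # take only that brick offline
    rm -rf /bricks/brick1/* /bricks/brick1/.glusterfs  # clear its contents
    gluster volume start myvol force                   # restart the downed brick process
    gluster volume heal myvol full                     # re-replicate from the good copy
]
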
16:34 skoduri joined #gluster
16:44 DV joined #gluster
16:48 DV__ joined #gluster
17:01 haomaiwa_ joined #gluster
17:15 monotek joined #gluster
17:33 plarsen joined #gluster
17:57 autostatic Stepping in for ju5t as we're working on the same problem: if we get an 'unable to find friend' error when we're using hostnames, even though it can resolve the hostname to an IP, what could be causing that?
18:01 haomaiwa_ joined #gluster
18:05 plarsen joined #gluster
18:10 ahino joined #gluster
18:13 Rapture joined #gluster
18:16 kotreshhr joined #gluster
18:23 plarsen joined #gluster
18:37 bfoster joined #gluster
18:41 mobaer joined #gluster
18:52 chirino joined #gluster
19:01 haomaiwa_ joined #gluster
19:17 squizzi_ joined #gluster
19:20 josh joined #gluster
19:31 kotreshhr left #gluster
19:34 squizzi joined #gluster
19:55 mlncn joined #gluster
20:01 haomaiwa_ joined #gluster
20:02 F2Knight joined #gluster
20:06 ctria joined #gluster
20:50 jbrooks joined #gluster
20:57 B21956 left #gluster
20:58 mlncn joined #gluster
21:01 7GHABS1TM joined #gluster
21:03 JoeJulian autostatic: The IP the hostname resolves to isn't correct, maybe?
21:04 mhulsman joined #gluster
21:48 mlncn joined #gluster
21:50 mhulsman joined #gluster
22:01 haomaiwa_ joined #gluster
22:22 DV joined #gluster
22:23 jrm16020 joined #gluster
22:25 mhulsman joined #gluster
22:42 B21956 joined #gluster
22:44 B21956 left #gluster
22:44 B21956 joined #gluster
22:44 B21956 joined #gluster
22:46 cliluw joined #gluster
22:50 muneerse2 joined #gluster
23:01 haomaiwa_ joined #gluster
23:41 arcolife joined #gluster
