IRC log for #gluster, 2015-01-04

All times shown according to UTC.

Time Nick Message
00:11 Sunghost strange if i mount it like env -i LC_NUMERIC="us_US.UTF-8" mount -t glusterfs -o transport=tcp clusternode01:vol2 /mnt/clustervol2
00:11 Sunghost it sometimes works, but not every time i run it; it also works with de_DE, sometimes - BUG?
00:12 Sunghost but only since version 3.6.1 - with 3.5.2-1 it is ok to mount
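Sunghost's workaround boils down to forcing a clean environment with a known-good locale before calling the mount helper. A minimal sketch (hostname and volume name taken from the log; the client log path is an assumption based on GlusterFS's usual naming, where slashes in the mount point become dashes; "us_US" is not a valid locale name, "en_US.UTF-8" is the spelling that exists on most systems):

```shell
# Mount with a stripped environment and an explicit locale
# (the workaround described in the log).
env -i LC_NUMERIC="en_US.UTF-8" \
    mount -t glusterfs -o transport=tcp clusternode01:vol2 /mnt/clustervol2

# If the mount silently fails, the FUSE client log usually explains why
# (path is an assumption: mount-point slashes become dashes).
tail -n 50 /var/log/glusterfs/mnt-clustervol2.log
```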
00:12 Sunghost joejulian say something ;)
00:16 Pupeno joined #gluster
00:17 iPancrea_ joined #gluster
00:20 iPancrea_ joined #gluster
00:29 iPancreas joined #gluster
00:32 sputnik1_ joined #gluster
01:02 msmith_ joined #gluster
01:09 iPancreas joined #gluster
01:11 Pupeno joined #gluster
01:11 Pupeno joined #gluster
01:24 zerick joined #gluster
01:45 iPancreas joined #gluster
02:07 sputnik13 joined #gluster
02:08 haomaiwang joined #gluster
02:10 msmith_ joined #gluster
02:31 maveric_amitc_ joined #gluster
02:47 iPancreas joined #gluster
02:48 prg3 joined #gluster
03:13 maveric_amitc_ joined #gluster
03:24 sputnik13 joined #gluster
03:43 hagarth joined #gluster
03:48 iPancreas joined #gluster
03:52 elico joined #gluster
03:58 sputnik13 joined #gluster
04:31 sputnik13 joined #gluster
04:41 sputnik13 joined #gluster
04:48 iPancreas joined #gluster
04:52 hagarth1 joined #gluster
05:49 iPancreas joined #gluster
05:53 spandit joined #gluster
06:05 livelace joined #gluster
06:40 sage_ joined #gluster
06:45 sputnik13 joined #gluster
06:49 iPancreas joined #gluster
06:54 bala joined #gluster
06:55 nangthang joined #gluster
07:05 DV joined #gluster
07:19 kovshenin joined #gluster
07:50 iPancreas joined #gluster
08:24 sputnik13 joined #gluster
08:42 soumya joined #gluster
08:50 iPancreas joined #gluster
09:13 jamesc joined #gluster
09:14 nbalacha joined #gluster
09:14 jamesc how can I check what the gluster compile options were
09:18 sputnik13 joined #gluster
09:18 rotbeard joined #gluster
09:44 nbalacha joined #gluster
09:48 sputnik13 joined #gluster
09:53 rotbeard joined #gluster
09:56 iPancreas joined #gluster
10:01 sputnik13 joined #gluster
10:13 LebedevRI joined #gluster
10:29 nangthang joined #gluster
10:30 sputnik13 joined #gluster
10:54 rjoseph joined #gluster
10:56 iPancreas joined #gluster
11:00 Pupeno joined #gluster
11:38 glusterbot News from newglusterbugs: [Bug 1070539] Very slow Samba Directory Listing when many files or sub-directories <https://bugzilla.redhat.com/show_bug.cgi?id=1070539>
11:53 Sunghost joined #gluster
11:53 nbalacha joined #gluster
11:55 Sunghost Hello, i have a freshly installed system with glusterfs 3.6.1 and the problem that i can't mount over fuse on the same system. i installed attr and mount but nothing happened.
11:57 iPancreas joined #gluster
12:14 DV joined #gluster
12:18 nangthang joined #gluster
12:57 iPancreas joined #gluster
13:04 merlink joined #gluster
13:43 DV joined #gluster
13:45 kale joined #gluster
13:45 kale hi, when i reboot a server, the gluster daemon does not start again. the servers are connected via dns names, dynamically updated via dhcp
13:46 kale if i log into the server i can just start the daemon with no issue. is this related to dns? i'm on debian/wheezy
13:58 iPancreas joined #gluster
14:00 hagarth joined #gluster
14:24 kale switched to static ips, seems to work fine
14:35 doekia joined #gluster
14:51 rotbeard joined #gluster
14:58 iPancreas joined #gluster
15:06 nangthang joined #gluster
15:59 iPancreas joined #gluster
16:04 n-st joined #gluster
16:39 tom[] i was so scared by the name resolution issue that i chose to only use static ips in gluster
16:40 tom[] ^ @kale
16:40 tom[] and too found that it works fine
16:59 iPancreas joined #gluster
17:01 rjoseph joined #gluster
17:33 plarsen joined #gluster
18:00 iPancreas joined #gluster
18:25 social joined #gluster
18:45 plarsen joined #gluster
19:00 iPancreas joined #gluster
19:16 PatNarciso @kale - likely a resolution issue.  check the logs for confirmation.  if you're using named resolution, consider leveraging  /etc/hosts, [part of the common setup]
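PatNarciso's /etc/hosts suggestion amounts to pinning the peer names so glusterd can resolve them at boot, before dynamic DNS is ready. A hypothetical fragment (hostnames and addresses are placeholders, not from the log):

```shell
# Add static entries for the gluster peers so name resolution works
# even before DHCP/dynamic DNS has updated (addresses are placeholders).
cat >> /etc/hosts <<'EOF'
192.168.1.11  gluster1
192.168.1.12  gluster2
EOF
```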
19:16 glusterbot PatNarciso: Error: You don't have the admin capability. If you think that you should have this capability, be sure that you are identified before trying again. The 'whoami' command can tell you if you're identified.
19:17 PatNarciso mic check.
19:32 Sunghost joined #gluster
19:51 ubungu joined #gluster
20:01 iPancreas joined #gluster
20:26 DV joined #gluster
20:41 plarsen joined #gluster
20:42 Sunghost Hello, i have a freshly installed debian 7 with glusterfs 3.6.1 in distributed mode. the problem is that i can't mount the gluster volume on the server - no error message, no mount
20:43 ama Can you mount it via NFS, Sunghost?
20:46 Sunghost not tried, but found a solution with env and language us_US
20:48 Sunghost it works before in version 3.5.2
20:48 Sunghost i now tried from one client with 3.5.2 and i could mount without problems
20:48 Sunghost before i installed the new server with 3.6.1 i had 3.5.2 and it works always without problems
20:49 Sunghost i used now this command env -i LC_NUMERIC="en_US.UTF-8" mount -t glusterfs -o transport=tcp node01:vol2 /mnt/vol2
20:49 Sunghost after 2 or 3 enter this command it get mounts
20:49 Sunghost but not on first time
20:51 Sunghost wait got a log error message
20:52 Sunghost Server and Client lk-version numbers are not same, reopening the fds
20:52 glusterbot Sunghost: This is normal behavior and can safely be ignored.
20:52 Sunghost [2015-01-03 23:42:39.378371] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (15), shutting down
20:52 glusterbot Sunghost: ('s karma is now -55
20:52 Sunghost [2015-01-03 23:46:37.722913] I [MSGID: 100030] [glusterfsd.c:2018:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.6.1 (args: /usr/sbin/glusterfs --volfile-server=node01 --volfile-server-transport=tcp --volfile-id=vol1 /mnt/vol1)
20:52 Sunghost [2015-01-03 23:46:37.731867] I [dht-shared.c:337:dht_init_regex] 0-vol1-dht: using regex rsync-hash-regex = ^\.(.+)\.[^.]+$
20:52 Sunghost [2015-01-03 23:46:37.733487] I [client.c:2280:notify] 0-vol1-client-0: parent translators are ready, attempting connect on transport
20:52 Sunghost [2015-01-03 23:46:37.734510] I [client.c:2280:notify] 0-vol1-client-1: parent translators are ready, attempting connect on transport
20:52 Sunghost Final graph:
20:52 Sunghost +------------------------------------------------------------------------------+
20:52 glusterbot Sunghost: +----------------------------------------------------------------------------'s karma is now -1
20:52 Sunghost [2015-01-03 23:46:37.736759] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-vol1-client-0: changing port to 49152 (from 0)
20:52 Sunghost [2015-01-03 23:46:37.736836] I [rpc-clnt.c:1761:rpc_clnt_reconfig] 0-vol1-client-1: changing port to 49153 (from 0)
20:52 Sunghost [2015-01-03 23:46:37.739712] I [client-handshake.c:1415:select_server_supported_programs] 0-vol1-client-1: Using Program GlusterFS 3.3, Num (1298437), Version (330)
20:52 Sunghost [2015-01-03 23:46:37.739944] I [client-handshake.c:1415:select_server_supported_programs] 0-vol1-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
20:52 Sunghost [2015-01-03 23:46:37.740252] W [client-handshake.c:1109:client_setvolume_cbk] 0-vol1-client-1: failed to set the volume (Permission denied)
20:52 Sunghost [2015-01-03 23:46:37.740288] W [client-handshake.c:1135:client_setvolume_cbk] 0-vol1-client-1: failed to get 'process-uuid' from reply dict
20:52 Sunghost [2015-01-03 23:46:37.740309] E [client-handshake.c:1141:client_setvolume_cbk] 0-vol1-client-1: SETVOLUME on remote-host failed: Authentication failed
20:52 Sunghost [2015-01-03 23:46:37.740329] I [client-handshake.c:1227:client_setvolume_cbk] 0-vol1-client-1: sending AUTH_FAILED event
20:52 Sunghost [2015-01-03 23:46:37.740359] E [fuse-bridge.c:5145:notify] 0-fuse: Server authenication failed. Shutting down.
20:52 Sunghost [2015-01-03 23:46:37.740382] I [fuse-bridge.c:5599:fini] 0-fuse: Unmounting '/mnt/vol1'.
20:52 Sunghost [2015-01-03 23:46:37.750406] W [glusterfsd.c:1194:cleanup_and_exit] (--> 0-: received signum (15), shutting down
20:52 glusterbot Sunghost: ('s karma is now -56
20:56 Sunghost as i wrote, mounting only works if i use the command above - with a normal mount.glusterfs or mount -t glusterfs i get no mount and no error
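The "failed to set the volume (Permission denied)" and "Authentication failed" lines in the pasted log point at the server rejecting the client handshake rather than at the locale itself. Two things commonly checked in that situation (volume name from the log; whether either applies to this 3.6.1 setup is an assumption):

```shell
# See whether the volume restricts clients (auth.allow / auth.reject):
gluster volume info vol1

# If clients connect from unprivileged ports - a common cause of
# "Permission denied" during the handshake - allowing insecure ports
# on the volume is one known workaround:
gluster volume set vol1 server.allow-insecure on
```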
21:01 iPancreas joined #gluster
21:04 Sunghost any idea - sorry for the many lines
21:19 Sunghost i tried nfs before and had bad performance
21:19 Sunghost so any idea on that problem? it looks like i am not the only one who has it
21:27 ama Can you mount it via NFS, Sunghost?  I have the same problem, but I can mount via NFS.
21:28 Sunghost to test this i have to install all for nfs ;(. but it must work on fuse, or not since 3.6x? nfs is no option
21:30 ama What do you mean by install all for nfs?  Just try mounting your volume via NFS and see if it does it or not.
21:32 Sunghost package nfs-common ;)
21:33 Sunghost ok nfs works
21:34 Sunghost no message in mnt log - should there be a positive nfs log entry?
21:34 Sunghost and what is with fuse? ;)
21:34 Sunghost bug?
21:35 Sunghost any performance tuning option for nfs?
21:35 ama No idea, I'm having the same problem for the last few days, I've asked, but nobody seems to know anything.
21:37 Sunghost ah ok same to me, i only found some other in web with solution to mount with language env like i wrote above
21:37 Sunghost what your experience with nfs mount against fuse - i only used fuse since today and a few tests in the past
21:38 Sunghost and thanks for that tip ;)
21:38 ama None, I don't use fuse.
21:43 Sunghost Confused?! You say you have the same problem, or not? So mounting with glusterfs-client is mounting via fuse, or am i wrong?
21:44 ama Yes, I do have the same problem.  Mounting via GlusterFS own protocol doesn't work, but mounting through NFS (native, not fuse) works and seems to be fine.
21:47 Sunghost ok i now mount like this mount -t nfs volserver:vol /mnt/vol <- is that via fuse or native?
21:47 Sunghost and am i right that mount via mount.glusterfs is with fuse?
21:48 ama I believe it's native, no fuse involved.
21:48 ama I don't know about GlusterFS own protocol, though.
21:49 Sunghost ok thanks
21:49 ama I hope it isn't fuse, but I can't tell for sure.
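For reference, GlusterFS's built-in NFS server speaks NFSv3 over TCP, and mounting it goes through the kernel NFS client rather than FUSE, which matches ama's description of "native". A typical invocation (server and volume names from the log; the option set is the commonly recommended one, not something stated in the channel):

```shell
# Mount a gluster volume via the built-in NFS server (NFSv3 over TCP).
mount -t nfs -o vers=3,tcp,nolock node01:/vol2 /mnt/vol2
```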
21:58 merlink joined #gluster
22:02 iPancreas joined #gluster
22:08 Sunghost any performance option you use?
22:10 ama Me?  No, I'm trying to recover the data out of three volumes I had before my system disk broke.
22:15 badone joined #gluster
22:16 Sunghost know that too - bad thing - i could and lost over 30tb data
22:16 Sunghost much luck
22:18 ama Thanks.  My data is intact, all bricks are fine.  Only the system disk broke and so I need to recreate the volumes, but I'm having some issues to access the data properly, it seems.
22:20 Sunghost oh ok
22:20 iPancreas joined #gluster
22:20 Sunghost perhaps some xattr wonder thing ;)
22:26 Sunghost i'm actually migrating data from vol1 to vol2 (new brick), while vol1 has lost its last brick, and the problem is that i can't copy new files onto the new vol2 - rebalance doesn't work
22:28 badone_ joined #gluster
22:33 kale left #gluster
22:44 plarsen joined #gluster
22:48 Sunghost i have 2 bricks in distributed mode; one brick is full and i ran rebalance but it doesn't move the files to the other brick - what can i do?
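When one brick of a distributed volume is full, a plain rebalance can decline to move files; the gluster CLI has a force variant and a status view (volume name from the log; whether force is appropriate here is a judgment call, and min-free-disk is an assumption about the likely knob):

```shell
# Re-run the rebalance, forcing data movement even where the layout
# would not otherwise choose the other brick:
gluster volume rebalance vol2 start force

# Watch progress:
gluster volume rebalance vol2 status

# Optionally reserve headroom so DHT stops placing new files on a
# nearly-full brick (value is an example):
gluster volume set vol2 cluster.min-free-disk 10%
```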
22:58 Pupeno_ joined #gluster
23:06 systemonkey joined #gluster
23:11 glusterbot News from newglusterbugs: [Bug 1065639] Crash in nfs with encryption enabled <https://bugzilla.redhat.com/show_bug.cgi?id=1065639>
23:11 glusterbot News from resolvedglusterbugs: [Bug 1030058] [FEAT] implement encryption support in glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1030058>
23:12 Pupeno joined #gluster
23:13 primusinterpares joined #gluster
23:41 glusterbot News from newglusterbugs: [Bug 1116782] Please add runtime option to show compile time configuration <https://bugzilla.redhat.com/show_bug.cgi?id=1116782>
23:41 glusterbot News from newglusterbugs: [Bug 959069] A single brick down of a dist-rep volume  results in geo-rep session "faulty" <https://bugzilla.redhat.com/show_bug.cgi?id=959069>
23:41 glusterbot News from newglusterbugs: [Bug 1026291] quota: directory limit cross, while creating data in subdirs <https://bugzilla.redhat.com/show_bug.cgi?id=1026291>
23:41 glusterbot News from newglusterbugs: [Bug 1094119] Remove replace-brick with data migration support from gluster cli <https://bugzilla.redhat.com/show_bug.cgi?id=1094119>
23:59 Pupeno_ joined #gluster