
IRC log for #gluster, 2016-06-28


All times shown according to UTC.

Time Nick Message
00:26 chirino joined #gluster
00:26 julim joined #gluster
00:41 shdeng joined #gluster
00:53 wadeholler joined #gluster
01:04 hagarth joined #gluster
01:13 hagarth joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:50 d0nn1e joined #gluster
01:54 baojg joined #gluster
02:11 baojg joined #gluster
02:13 harish__ joined #gluster
02:32 shyam left #gluster
02:38 javi404 joined #gluster
02:49 Alghost joined #gluster
02:58 gem joined #gluster
03:09 Lee1092 joined #gluster
03:17 haomaiwang joined #gluster
03:18 haomaiwang joined #gluster
03:23 ppai joined #gluster
03:41 magrawal joined #gluster
04:01 Dogethrower joined #gluster
04:01 RameshN joined #gluster
04:12 itisravi joined #gluster
04:16 itisravi joined #gluster
04:21 kotreshhr joined #gluster
04:23 kramdoss_ joined #gluster
04:30 nehar joined #gluster
04:30 itisravi_ joined #gluster
04:32 poornimag joined #gluster
04:34 shubhendu joined #gluster
04:40 gem joined #gluster
04:43 Apeksha joined #gluster
04:54 ramky joined #gluster
04:56 prasanth joined #gluster
04:58 ppai joined #gluster
05:04 jiffin joined #gluster
05:04 sakshi joined #gluster
05:07 kramdoss_ joined #gluster
05:08 kotreshhr left #gluster
05:11 ndarshan joined #gluster
05:15 Manikandan joined #gluster
05:20 atinm joined #gluster
05:21 Bhaskarakiran joined #gluster
05:22 baojg joined #gluster
05:25 aravindavk joined #gluster
05:34 hgowtham joined #gluster
05:39 Manikandan joined #gluster
05:44 pur joined #gluster
05:50 satya4ever joined #gluster
05:54 kshlm joined #gluster
05:54 skoduri joined #gluster
06:01 ashiq joined #gluster
06:02 [diablo] joined #gluster
06:08 aspandey joined #gluster
06:09 ppai joined #gluster
06:15 gowtham joined #gluster
06:18 karnan joined #gluster
06:20 Saravanakmr joined #gluster
06:20 kdhananjay joined #gluster
06:21 jtux joined #gluster
06:23 nishanth joined #gluster
06:27 itisravi joined #gluster
06:30 rafi joined #gluster
06:33 jtux joined #gluster
06:37 karthik___ joined #gluster
06:38 rafi joined #gluster
06:39 kshlm joined #gluster
06:39 Manikandan joined #gluster
06:40 Manikandan joined #gluster
06:43 prasanth joined #gluster
06:44 rafi joined #gluster
06:44 karnan_ joined #gluster
06:46 rafi joined #gluster
06:48 msvbhat joined #gluster
06:48 rafi joined #gluster
06:53 rafi joined #gluster
06:54 DV joined #gluster
06:58 jri joined #gluster
07:00 atalur joined #gluster
07:09 ivan_rossi joined #gluster
07:10 itisravi joined #gluster
07:15 [Enrico] joined #gluster
07:19 hgowtham joined #gluster
07:19 poornimag joined #gluster
07:24 Philambdo joined #gluster
07:25 ramky joined #gluster
07:37 poornimag joined #gluster
07:39 ahino joined #gluster
07:44 hagarth joined #gluster
08:04 itisravi joined #gluster
08:13 hchiramm joined #gluster
08:14 Dogethrower joined #gluster
08:16 R0ok_ joined #gluster
08:21 hackman joined #gluster
08:23 kshlm joined #gluster
08:24 gowtham joined #gluster
08:25 Slashman joined #gluster
08:25 rafi1 joined #gluster
08:25 rastar joined #gluster
08:27 karthik___ joined #gluster
08:29 rafi joined #gluster
08:36 harish_ joined #gluster
08:45 gowtham joined #gluster
08:45 ivan_rossi left #gluster
08:55 hchiramm joined #gluster
09:01 aravindavk_ joined #gluster
09:03 martinc joined #gluster
09:05 Saravanakmr joined #gluster
09:06 martinc joined #gluster
09:07 martinc joined #gluster
09:09 martinc joined #gluster
09:09 martinc joined #gluster
09:13 martinc joined #gluster
09:23 martinc joined #gluster
09:26 arcolife joined #gluster
09:37 baojg joined #gluster
09:37 itisravi joined #gluster
09:37 kdhananjay joined #gluster
09:45 Seth_Karlo joined #gluster
09:46 baojg joined #gluster
09:49 thatgraemeguy joined #gluster
09:50 aspandey joined #gluster
09:50 atalur joined #gluster
09:58 sakshi joined #gluster
09:58 luizcpg joined #gluster
09:59 poornimag joined #gluster
10:01 Seth_Karlo joined #gluster
10:01 rastar joined #gluster
10:02 guest_id joined #gluster
10:11 guest_id I have created a volume with the bd xlator. At first it worked well, but after rebooting the LVs were in a NOT AVAILABLE state; after running the command 'lvchange -ay vg' they became available. But now, when I mount the volume on a client, it does not show the correct size of the volume - it shows the size of the posix directory path on the gluster server. Can anyone help me? What is the problem?
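A minimal sketch of the workaround guest_id describes; the volume group, server and volume names are placeholders, and making the activation survive reboots depends on the distribution's LVM boot services:

    # check which LVs came up inactive after the reboot
    lvs -o lv_name,vg_name,lv_attr
    # re-activate every LV in the volume group (placeholder name "vg")
    vgchange -ay vg
    # remount the gluster volume on the client and re-check the reported size
    mount -t glusterfs server:/volname /mnt/volname
    df -h /mnt/volname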
10:23 bfoster joined #gluster
10:33 kdhananjay joined #gluster
10:34 itisravi joined #gluster
10:38 msvbhat joined #gluster
10:41 itisravi joined #gluster
10:48 hchiramm joined #gluster
10:51 johnmilton joined #gluster
10:51 martinc how can i change hostnames of peers in gluster?
10:52 Manikandan joined #gluster
10:54 aravindavk_ joined #gluster
10:55 martinc joined #gluster
10:56 jiffin martinc: I am not sure the following is the right way, but it may work
10:57 jiffin after changing the hostname in the necessary file, just do a peer probe with the new hostname
10:57 jiffin it will either update properly or it may end up in a mess
10:58 wadeholler joined #gluster
11:00 martinc i can't afford to make a mess with the server...
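A sketch of the approach jiffin outlines, with a placeholder hostname; since the outcome is uncertain, it is best rehearsed on a throwaway test cluster before touching the server martinc cares about:

    # on a peer that is already in the pool, probe the renamed server
    gluster peer probe newname.example.com
    # confirm how the pool now identifies that peer
    gluster peer status
    gluster pool list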
11:00 ira joined #gluster
11:01 wadeholler joined #gluster
11:05 aspandey joined #gluster
11:06 Saravanakmr joined #gluster
11:08 ndevos REMINDER: Gluster Bug Triage meeting at 12:00 UTC - http://article.gmane.org/gmane.comp.file-systems.gluster.devel/15726
11:08 glusterbot Title: Gmane -- REMINDER: Gluster Bug Triage starts at 12:00 UTC 1 hour from now (at article.gmane.org)
11:09 martinc joined #gluster
11:11 arcolife joined #gluster
11:12 martinc joined #gluster
11:12 martinc joined #gluster
11:13 hchiramm joined #gluster
11:14 martin_pb joined #gluster
11:15 martin_pb joined #gluster
11:16 martin_pb joined #gluster
11:22 hchiramm joined #gluster
11:30 hchiramm joined #gluster
11:31 rastar joined #gluster
11:31 poornimag joined #gluster
11:47 gem joined #gluster
11:49 rafaels joined #gluster
11:55 rouven joined #gluster
11:56 kshlm joined #gluster
11:57 skoduri joined #gluster
11:59 ndevos REMINDER: Gluster Bug Triage meeting starting now in #gluster-meeting
12:02 skoduri joined #gluster
12:10 karthik___ joined #gluster
12:12 DaSt joined #gluster
12:12 DaSt Hi
12:12 glusterbot DaSt: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:13 DaSt I'm trying to set up glusterfs on an Oracle Linux 6 box. I can't mount the created volume, and there is no error in the log file - I just get "Mount failed. Please check the log file for more details."
12:14 DaSt gluster volume status shows everything online
12:18 ppai joined #gluster
12:21 jiffin DaSt: can you check the client log file for more clues
12:21 jiffin ?
12:22 DaSt i see no other file changing in the entire var/log dir
12:22 jiffin for Fedora-like distributions it should be /var/log/glusterfs/<path-to-mount-point>.log
12:22 msvbhat joined #gluster
12:23 DaSt yes, that file does change, but there are no "E" or "C" lines
12:23 DaSt only one warning when glusterfsd is shutting down
12:24 jiffin can u paste the output in termint/fpaste from the end?
12:24 jiffin say 20 lines
12:24 DaSt yes, one sec plz
12:25 DaSt Here : https://da.gd/F6ca2
12:25 glusterbot Title: #385830 Fedora Project Pastebin (at da.gd)
12:27 rouven_ joined #gluster
12:30 DaSt jiffin: got the link ?
12:30 jiffin DaSt: Checking
12:31 DaSt k thx
12:32 magrawal joined #gluster
12:35 jiffin DaSt: your mount point is /data/data1, right?
12:35 DaSt yes
12:36 jiffin DaSt: that doesn't seem very helpful
12:37 DaSt thats what i think too
12:37 jiffin can u try the same with reducing the debug level
12:37 jiffin sorry log level
12:37 DaSt yes
12:38 jiffin gluster v set <volname> diagnostics.client-log-level DEBUG/TRACE
12:44 DaSt here you are : https://da.gd/zHPyi
12:44 glusterbot Title: #385838 Fedora Project Pastebin (at da.gd)
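A sketch of the debug loop jiffin is walking DaSt through, assuming the volume is named data1 and mounted on /data/data1 as in the mount command shown later; the fuse client names its log after the mount point, with slashes replaced by dashes:

    gluster volume set data1 diagnostics.client-log-level DEBUG
    mount -t glusterfs file1-san-cta-devf:data1 /data/data1
    tail -n 20 /var/log/glusterfs/data-data1.log
    # put the log level back once done
    gluster volume reset data1 diagnostics.client-log-level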
12:47 jiffin DaSt: I guess the issue is this one  D [fuse-bridge.c:4823:fuse_thread_proc] 0-glusterfs-fuse: terminating upon getting ENODEV when reading /dev/fuse
12:48 jiffin DaSt: i have never tried Oracle Linux 6
12:49 DaSt as i couldnt make the public repo gluster package work i used the one from the epel repo
12:49 DaSt and also installed fuse from the same epel repo
12:49 post-factum DaSt: is that hardware linux box or some lxc container?
12:49 DaSt thats a vmware vm
12:50 post-factum shouldn't something like modprobe fuse do the trick?
12:50 DaSt it does not, no :(
12:51 post-factum lsmod | grep fuse, please
12:51 post-factum is the module loaded?
12:51 post-factum and ls -lh /dev/fuse just in case please too
12:52 jiffin post-factum: yeah it seems to be
12:53 post-factum permissions issue maybe?
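The checks post-factum runs through, gathered into one sketch; everything here only inspects state except the modprobe:

    modprobe fuse          # load the module if it is not already there
    lsmod | grep fuse      # confirm it is loaded
    ls -lh /dev/fuse       # the character device should exist and be read/write
    dmesg | grep -i fuse   # look for init or error messages from the module
    getenforce             # rule out SELinux (DaSt had already disabled it)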
13:00 ahino1 joined #gluster
13:12 frakt joined #gluster
13:18 DaSt sorry i got delayed by a phone call
13:18 DaSt lsmod | grep fuse returns nothing
13:19 post-factum modinfo fuse, please
13:19 DaSt nvm i was on the wrong srv -_-
13:19 DaSt i do have a fuse module
13:19 post-factum meh
13:19 post-factum ok, and it is loaded?
13:19 post-factum lsmod shows fuse
13:19 DaSt yes
13:19 post-factum what about /dev/fuse entry?
13:19 post-factum ls -lh on it
13:20 DaSt its there
13:20 post-factum show please ls -lh output
13:20 DaSt crw-rw-rw- 1 root root 10, 229 Jun 28 13:55 /dev/fuse
13:21 post-factum and still ENODEV error?
13:21 post-factum selinux, maybe?
13:21 DaSt disabled it first thing even before installing gluster
13:21 post-factum wtf then :)...
13:21 post-factum jiffin: JoeJulian: ideas?
13:22 never2far joined #gluster
13:22 never2far joined #gluster
13:22 never2far joined #gluster
13:22 jiffin post-factum, DaSt: no idea
13:22 post-factum DaSt: maybe, some dmesg errors?
13:23 post-factum dmesg | grep -i fuse or so
13:23 DaSt returns : fuse init (API version 7.20)
13:23 jiffin DaSt: can u try the nfs mount and check whether it is working or not?
13:24 DaSt what nfs mount are u talkin bout ?
13:25 jiffin the volume will be exported via nfs as well
13:25 jiffin just want to check whether issue is related to fuse or not
13:26 DaSt i can mount the share by replacing -t glusterfs by -t nfs
13:26 jiffin hmm
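A sketch of the comparison jiffin asked for, using DaSt's server and volume names; gluster's built-in NFS server speaks NFSv3, so vers=3 is the usual option:

    # fuse mount - the path that fails with ENODEV on /dev/fuse
    mount -t glusterfs file1-san-cta-devf:data1 /data/data1
    # NFS mount of the same volume - this one worked, which points at fuse rather than the volume
    mount -t nfs -o vers=3 file1-san-cta-devf:/data1 /data/data1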
13:27 nishanth joined #gluster
13:27 post-factum https://www.gluster.org/pipermail/gluster-users.old/2013-July/013382.html
13:27 glusterbot Title: [Gluster-users] Mounting replicated volume fails - terminating upon getting ENODEV when reading /dev/fuse (at www.gluster.org)
13:27 post-factum but i see no solution there
13:28 jiffin DaSt: did u use any specific option for the mount ?
13:28 DaSt nop i didnt
13:29 DaSt mount -t glusterfs file1-san-cta-devf:data1 /data/data1
13:30 jiffin|meeting csaba: any ideas?
13:31 DaSt on a forum i found a theory that the version of fuse is too recent for my ol6
13:31 DaSt I used glusterfs 3.6.9 package on an OL6.6 x64
13:31 JoeJulian "not working - 3.8.13-26.2.3.el6uek.x86_64" "working uek - 2.6.39-400.17.1.el6uek.x86_64"
13:32 JoeJulian looks like it's an oracle bug...
13:32 DaSt im on 3.8.13-44.1.1.el6uek.x86_64
13:33 DaSt JoeJulian where did you find that piece of info plz ?
13:34 JoeJulian https://botbot.me/freenode/gluster/2014-04-29/?msg=14000841&amp;page=7
13:35 JoeJulian Looks like refrainblue fixed it by switching away from uek
13:36 JoeJulian open a issue with oracle, imho.
13:36 JoeJulian Isn't that what you're paying for?
13:36 DaSt I think it is
13:36 DaSt but they won't be pleased by me using packages from EPEL
13:36 DaSt :)
13:36 DaSt been there
13:37 DaSt if i only could switch away from OL...
13:37 JoeJulian Paying for that and to support their lawyers so they can sue the rest of the Linux community whenever they can.
13:38 DaSt anyway thanks a lot for your precious help
13:38 DaSt i'm gonna dig in that direction
13:39 JoeJulian You're welcome. Good luck. :)
13:41 DaSt thx :)
13:43 Apeksha joined #gluster
13:46 luizcpg joined #gluster
13:47 DaSt joined #gluster
13:47 DaSt Hi back
13:47 DaSt for the record i tried rebooting using the RHEL compatible kernel and it works like a charm
13:48 atalur joined #gluster
13:48 DaSt uname -a
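A sketch of how that kernel switch is usually done on an EL6-style box; the grub.conf path and the exact kernel version are assumptions about DaSt's system, so check the actual menu entries first:

    uname -r                                           # currently running kernel (the UEK build)
    grep -E '^(title|default)' /boot/grub/grub.conf    # list boot entries and the current default
    # set default=N to the index of the RHEL-compatible entry (kernel-2.6.32-*.el6), then reboot
    uname -r                                           # verify after the reboot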
13:50 squizzi joined #gluster
13:51 ashiq joined #gluster
13:51 dnunez joined #gluster
13:54 guhcampos joined #gluster
13:55 MikeLupe joined #gluster
13:57 rafaels joined #gluster
14:03 msvbhat joined #gluster
14:11 dnunez joined #gluster
14:16 jwd joined #gluster
14:24 aravindavk_ joined #gluster
14:44 anoopcs joined #gluster
14:46 rafaels joined #gluster
14:48 karnan joined #gluster
14:50 harish_ joined #gluster
14:54 rouven joined #gluster
14:56 gowtham joined #gluster
14:59 rafaels joined #gluster
15:03 wushudoin joined #gluster
15:05 ic0n joined #gluster
15:08 kpease joined #gluster
15:14 btpier_ joined #gluster
15:17 rafaels joined #gluster
15:27 olisch joined #gluster
15:27 nishanth joined #gluster
15:37 btpier_ left #gluster
16:03 shubhendu joined #gluster
16:10 guhcampos joined #gluster
16:19 hackman joined #gluster
16:29 ctria joined #gluster
16:31 skylar joined #gluster
16:32 Acinonyx joined #gluster
16:33 chirino_m joined #gluster
16:37 skoduri joined #gluster
16:38 rafaels joined #gluster
16:46 d0nn1e joined #gluster
16:56 jwd joined #gluster
16:59 MikeLupe joined #gluster
17:09 yopp joined #gluster
17:13 jiffin joined #gluster
17:20 renout_away joined #gluster
17:30 julim joined #gluster
18:13 ahino joined #gluster
18:29 PsionTheory joined #gluster
18:54 rafaels joined #gluster
19:02 kovshenin joined #gluster
19:04 jiffin joined #gluster
19:05 pampan joined #gluster
19:08 pampan Hi guys. This is a long shot, but maybe someone can give me a hint. I'm using 3.5.1 and added a new brick, but it's not syncing. The log for the triggered process has this warning: W [client-rpc-fops.c:2772:client3_3_lookup_cbk] gv-volume : remote operation failed: No such file or directory. Path:  (d4bfd168-7094-46fb-934c-1aeb4a2e77a1)
19:09 pampan That volume had 2 bricks, I added a new one. The 3 bricks are online, according to the volume status.
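A sketch of the usual way to kick off and watch the sync after adding a replica brick; the volume name is taken from the pasted warning, and the commands assume the self-heal daemon is running on all nodes:

    # force a full self-heal crawl so the new brick gets populated
    gluster volume heal gv-volume full
    # watch progress / remaining entries
    gluster volume heal gv-volume info
    # confirm the self-heal daemon shows up as online
    gluster volume status gv-volume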
19:09 ben453 joined #gluster
19:10 guhcampos joined #gluster
19:14 post-factum pampan: any reason to use outdated gluster version?
19:15 jiffin joined #gluster
19:15 pampan I'll gladly update, but we can't do it right now without properly testing the new version
19:18 ben453 Does anyone know if gluster will automatically fix mismatched gfids on different bricks if the files are identical?
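ben453's question goes unanswered in the log; a sketch of how one would at least inspect the gfids directly (brick paths and the volume name are placeholders, and the getfattr calls are run against the brick backends, not the mount):

    # compare the gfid xattr of the same file on two bricks
    getfattr -n trusted.gfid -e hex /bricks/brick1/path/to/file
    getfattr -n trusted.gfid -e hex /bricks/brick2/path/to/file
    # a mismatch may surface as a heal-pending or split-brain entry
    gluster volume heal VOLNAME info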
19:25 wadeholl_ joined #gluster
19:26 wadehol__ joined #gluster
19:50 rouven joined #gluster
19:59 wadeholler joined #gluster
20:17 shyam joined #gluster
20:28 shyam left #gluster
20:29 hagarth joined #gluster
20:40 cloph_away hi, getting tons of warnings in a brick's log for a volume that I cannot make much sense of: [2016-06-28 20:39:46.681152] W [posix-acl.c:345:posix_acl_ctx_get] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x1a3)[0x7f2a7a0b9503] (--> /usr/lib/x86_64-linux-gnu/glusterfs/3.7.11/xlator/features/access-control.so(posix_acl_ctx_get+0xbd)[0x7f2a6e86737d] (-->
20:40 cloph_away /usr/lib/x86_64-linux-gnu/glusterfs/3.7.11/xlator/features/access-control.so(posix_acl_setxattr_update+0x42)[0x7f2a6e86d0c2] (--> /usr/lib/x86_64-linux-gnu/glusterfs/3.7.11/xlator/features/access-control.so(posix_acl_setxattr+0xf7)[0x7f2a6e86d2b7] (--> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(default_setxattr+0x75)[0x7f2a7a0bbbf5] ))))) 0-backup-berta-access-control: inode is NULL
20:40 glusterbot cloph_away: ('s karma is now -139
20:41 glusterbot cloph_away: ('s karma is now -140
20:41 glusterbot cloph_away: ('s karma is now -141
20:41 glusterbot cloph_away: ('s karma is now -142
20:41 glusterbot cloph_away: ('s karma is now -143
20:41 cloph_away why is it complaining about acl, when the volume doesn't use acl in the first place - and what inode does it try to use so many times?
20:54 nishanth joined #gluster
21:17 sfrolov joined #gluster
21:17 MikeLupe joined #gluster
21:22 sfrolov Hi guys. I figured that my setup (a separate brick for each of my hard drives, resulting in 9 bricks per node) is a bad idea, so I would like to rebuild the whole thing and reduce the number of bricks to 1 on each of my 3 nodes. Any advice on how to approach this? Do I just rebuild one node with a single brick and try to sync it back in?
21:52 JoeJulian sfrolov: I'm curious what made you decide it was a bad idea? I've used several approaches in production and I keep going back to smaller faster multiple bricks per server.
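If sfrolov does decide to consolidate anyway, shrinking is normally done with the remove-brick workflow rather than rebuilding from scratch; a sketch with placeholder brick paths, where the exact set of bricks removed per step depends on the volume's replica/distribute layout:

    # migrate data off one set of bricks being retired
    gluster volume remove-brick VOLNAME node1:/bricks/b2 node2:/bricks/b2 node3:/bricks/b2 start
    # poll until the migration shows completed
    gluster volume remove-brick VOLNAME node1:/bricks/b2 node2:/bricks/b2 node3:/bricks/b2 status
    # then make it permanent
    gluster volume remove-brick VOLNAME node1:/bricks/b2 node2:/bricks/b2 node3:/bricks/b2 commit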
21:54 pampan Whenever you add a brick to a volume in replication mode... is the synchronization supposed to start immediately?
21:54 chirino joined #gluster
21:58 JoeJulian More or less.
22:04 johnmilton joined #gluster
22:08 The_Ball joined #gluster
22:09 The_Ball I have three gluster nodes. On the first two, gluster volume status lists all three members; on the last server only node1 and localhost are listed. Any idea how I can rectify this? I was having issues with peer probing and deleted the /var/lib/gluster folder a few times
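A sketch of the usual recovery steps when one node's view of the pool has drifted, with placeholder hostnames; glusterd keeps its peer records under /var/lib/glusterd/peers, so comparing that directory across nodes shows what the odd node is missing:

    # on the node with the incomplete view
    gluster peer status
    ls /var/lib/glusterd/peers/
    # from a node that sees all three peers, probe the odd node so state gets pushed back out
    gluster peer probe node3.example.com
    gluster peer status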
22:27 hagarth joined #gluster
22:30 kenansulayman joined #gluster
22:42 btpier joined #gluster
22:42 hagarth joined #gluster
22:57 luizcpg joined #gluster
23:38 luizcpg joined #gluster
23:44 MrAbaddon joined #gluster
