
IRC log for #gluster, 2015-11-09


All times shown according to UTC.

Time Nick Message
00:01 haomaiwang joined #gluster
00:11 zhangjn joined #gluster
00:42 mlhamburg joined #gluster
00:56 zhangjn joined #gluster
00:57 zhangjn joined #gluster
01:01 haomaiwa_ joined #gluster
01:02 zhangjn_ joined #gluster
01:04 EinstCrazy joined #gluster
01:04 EinstCrazy joined #gluster
01:11 shaunm joined #gluster
01:31 Pupeno joined #gluster
01:32 Lee1092 joined #gluster
01:34 gildub joined #gluster
01:47 johnmark joined #gluster
02:11 nangthang joined #gluster
02:16 haomaiwang joined #gluster
02:19 haomaiwa_ joined #gluster
02:19 haomaiwa_ joined #gluster
02:22 haomaiwa_ joined #gluster
02:38 sage joined #gluster
02:41 davidself joined #gluster
02:45 haomaiwa_ joined #gluster
03:01 haomaiwa_ joined #gluster
03:02 kdhananjay joined #gluster
03:11 DV joined #gluster
03:25 DV_ joined #gluster
03:29 sakshi joined #gluster
03:32 Manikandan joined #gluster
03:33 overclk_ joined #gluster
03:44 itisravi joined #gluster
03:45 itisravi joined #gluster
03:48 itisravi joined #gluster
03:51 gem joined #gluster
03:52 arcolife joined #gluster
03:53 atinm joined #gluster
03:54 nbalacha joined #gluster
03:55 bkunal joined #gluster
03:58 zhangjn joined #gluster
04:00 zhangjn joined #gluster
04:01 shaunm joined #gluster
04:01 haomaiwang joined #gluster
04:02 dgbaley joined #gluster
04:06 kdhananjay joined #gluster
04:08 kdhananjay joined #gluster
04:22 kshlm joined #gluster
04:25 [7] joined #gluster
04:27 vimal joined #gluster
04:32 kotreshhr joined #gluster
04:32 kotreshhr left #gluster
04:35 SpiceMan joined #gluster
04:35 zhangjn joined #gluster
04:36 SpiceMan uhm. where can I find the ports for 3.7? http://www.gluster.org/community/documentation/index.php/Basic_Gluster_Troubleshooting
04:36 SpiceMan seems to be using 65534 and 65533 here, but can't find an exact list
04:39 SpiceMan (nor a place to configure this? o.O)
04:41 Humble joined #gluster
04:49 ramteid joined #gluster
04:54 atinm SpiceMan, are you talking about the client <-> server connection ports?
04:55 atinm SpiceMan, if so then it ranges from 49152-65536
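For reference: glusterd itself listens on TCP 24007, and each brick process takes a port from that 49152+ range once a volume has bricks. A minimal sketch of how to see what is actually in use, assuming a hypothetical volume named "testvol":

    ss -tlnp | grep gluster          # glusterd on 24007, glusterfsd brick processes on 49152+
    gluster volume status testvol    # the "Port" column lists each brick's TCP port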
04:56 overclk joined #gluster
05:01 17SAD0UKG joined #gluster
05:01 haomaiwang joined #gluster
05:02 kanagaraj joined #gluster
05:03 SpiceMan atinm: uhm, no. I just started two servers and just peer probed each other. http://pastie.org/10539992  < listening at port 65534
05:03 glusterbot Title: #10539992 - Pastie (at pastie.org)
05:03 SpiceMan the other server listens at 65533
05:03 SpiceMan no volumes, so no bricks at all yet
05:03 SpiceMan I find no reference whatsoever to needing to listen like that
05:06 hgowtham_ joined #gluster
05:08 ndarshan joined #gluster
05:09 pppp joined #gluster
05:13 SpiceMan no, ignore me. what I said didn't make sense.
05:13 * SpiceMan needs to sleep
05:18 overclk_ joined #gluster
05:19 rafi joined #gluster
05:29 dusmant joined #gluster
05:30 SpiceMan uhm. http://pastie.org/pastes/10540037/text   < wth? I created a volume in that path, then gluster volume deleted it. what am I missing?
05:30 glusterbot Title: #10540037 - Pastie (at pastie.org)
05:32 SpiceMan nvm, googled it
05:38 dusmant joined #gluster
05:41 atinm SpiceMan, give me some time, will look at it
05:41 deepakcs joined #gluster
05:43 skoduri joined #gluster
05:48 karnan joined #gluster
05:49 R0ok_ joined #gluster
05:55 Apeksha joined #gluster
05:55 kotreshhr joined #gluster
05:56 vmallika joined #gluster
05:59 ashiq joined #gluster
06:01 zhangjn_ joined #gluster
06:01 haomaiwa_ joined #gluster
06:04 aravindavk joined #gluster
06:20 nishanth joined #gluster
06:21 spalai joined #gluster
06:26 zhangjn joined #gluster
06:27 hagarth joined #gluster
06:28 ramky joined #gluster
06:29 shubhendu joined #gluster
06:35 bhuddah joined #gluster
06:35 zhangjn joined #gluster
06:36 EinstCrazy joined #gluster
06:37 nangthang joined #gluster
06:38 ppai joined #gluster
06:44 zhangjn joined #gluster
06:49 atalur_ joined #gluster
07:01 haomaiwa_ joined #gluster
07:06 EinstCrazy joined #gluster
07:15 atinm SpiceMan, to answer your second question: it seems the brick path is already used by a gluster volume, probably because you've not cleaned up your old setup; adding 'force' at the end of the volume create command will let the volume be created here
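A minimal sketch of both ways out of that situation, with hypothetical names (volume "testvol", brick directory /data/brick1 on server1): either force the create, or clear the leftover volume markers from the old brick directory so a normal create succeeds.

    # option 1: override the "already part of a volume" check
    gluster volume create testvol server1:/data/brick1 force

    # option 2: clean up the old brick first, then create normally
    setfattr -x trusted.glusterfs.volume-id /data/brick1
    setfattr -x trusted.gfid /data/brick1
    rm -rf /data/brick1/.glusterfs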
07:18 TvL2386 joined #gluster
07:18 zhangjn joined #gluster
07:23 mmckeen joined #gluster
07:25 jvandewege joined #gluster
07:26 jtux joined #gluster
07:26 zhangjn joined #gluster
07:27 EinstCrazy joined #gluster
07:30 mmckeen joined #gluster
07:42 mmckeen joined #gluster
07:53 zhangjn joined #gluster
08:01 haomaiwa_ joined #gluster
08:03 zhangjn joined #gluster
08:03 overclk joined #gluster
08:16 [Enrico] joined #gluster
08:19 kdhananjay1 joined #gluster
08:23 kdhananjay joined #gluster
08:28 poornimag joined #gluster
08:30 [Enrico] joined #gluster
08:33 EinstCrazy joined #gluster
08:33 zhangjn joined #gluster
08:33 kovshenin joined #gluster
08:36 gildub joined #gluster
08:36 ivan_rossi joined #gluster
08:37 zhangjn joined #gluster
08:38 kovsheni_ joined #gluster
08:40 mlhamburg1 joined #gluster
08:52 overclk joined #gluster
08:58 Philambdo joined #gluster
09:01 haomaiwa_ joined #gluster
09:05 muneerse joined #gluster
09:05 mbukatov joined #gluster
09:11 LebedevRI joined #gluster
09:11 DV__ joined #gluster
09:14 Slashman joined #gluster
09:26 _shaps_ joined #gluster
09:31 muneerse2 joined #gluster
09:33 jvandewege joined #gluster
09:37 muneerse joined #gluster
09:43 mhulsman joined #gluster
09:45 mhulsman joined #gluster
09:55 Manikandan joined #gluster
10:01 haomaiwa_ joined #gluster
10:07 Trefex joined #gluster
10:16 Manikandan joined #gluster
10:33 Pupeno joined #gluster
10:35 gem joined #gluster
11:01 18WAA3WHA joined #gluster
11:18 lorddoskias joined #gluster
11:19 lorddoskias hi, when i try to pass fuse options when mounting glusterfs i get "Invalid options: fuse-opt"
11:19 kdhananjay joined #gluster
11:19 lorddoskias how do i fix that ?
11:25 kdhananjay joined #gluster
11:33 sankarshan joined #gluster
11:35 kdhananjay1 joined #gluster
11:36 zhangjn joined #gluster
11:36 zhangjn joined #gluster
11:37 EinstCrazy joined #gluster
11:38 EinstCrazy joined #gluster
11:42 Manikandan joined #gluster
11:43 kdhananjay joined #gluster
11:43 kovshenin joined #gluster
11:48 rafi joined #gluster
11:48 kdhananjay joined #gluster
11:49 kovshenin joined #gluster
11:49 kdhananjay joined #gluster
12:03 spalai left #gluster
12:04 lpabon joined #gluster
12:05 hos7ein joined #gluster
12:07 vmallika joined #gluster
12:12 kotreshhr left #gluster
12:12 kotreshhr joined #gluster
12:13 kotreshhr left #gluster
12:16 Philambdo joined #gluster
12:19 ppai joined #gluster
12:22 skoduri joined #gluster
12:27 Philambdo joined #gluster
12:27 Vaelatern joined #gluster
12:29 shubhendu joined #gluster
12:31 overclk joined #gluster
12:31 Manikandan joined #gluster
12:40 volga629 joined #gluster
12:40 volga629 Hello Everyone, what is this error: Staging failed on vg0.networklab.lan. Error: Volume name get failed ?
12:42 shyam joined #gluster
12:43 Mr_Psmith joined #gluster
12:58 ppai joined #gluster
13:04 unclemarc joined #gluster
13:07 skylar1 joined #gluster
13:10 hagarth joined #gluster
13:11 Pupeno joined #gluster
13:13 haomaiwang joined #gluster
13:15 David_Varghese joined #gluster
13:17 jmarley joined #gluster
13:18 bluenemo joined #gluster
13:27 marcoc_ joined #gluster
13:34 marcoc_ Hi all, sorry, can someone help me with http://www.gluster.org/pipermail/gluster-users/2015-October/024121.html ?
13:34 shyam joined #gluster
13:34 glusterbot Title: [Gluster-users] Missing files after add new bricks and remove old ones - how to restore files (at www.gluster.org)
13:36 rafi joined #gluster
13:40 marcoc_ I have a 2x2 Distributed-Replicate volume. I need to free up a node. What is the correct procedure to migrate the brick without losing files (like happened last time), with gluster 3.7.5?
13:40 ppai joined #gluster
13:45 hgowtham_ joined #gluster
13:47 bowhunter joined #gluster
13:49 gb21 joined #gluster
13:52 skoduri joined #gluster
13:52 overclk joined #gluster
13:57 plarsen joined #gluster
13:57 tSilenzio joined #gluster
13:58 tSilenzio Hey, not sure if this is the correct channel but I wanted to know if someone could help me with a weird issue I am experiencing?
13:59 B21956 joined #gluster
14:01 ppai joined #gluster
14:01 haomaiwa_ joined #gluster
14:02 tSilenzio I have two EC2 servers set up using Ubuntu 14.04. In the hosts file I reference the other box on each. I port forwarded internally between the two [111, 2049, 24007 - 24008, 38465 - 38467, 49152 - 49153] for TCP and [111] for UDP. Whenever I attempt to sync them up using "sudo gluster peer probe gluster1.mydomain.com" I get the message: peer probe: success. Host gluster1.git.yashi.com port 24007 already in peer list
14:02 tSilenzio however the program crashes
14:02 tSilenzio as evident by running: "sudo /etc/init.d/glusterfs-server status"
14:03 tSilenzio I am using glusterfs 3.7.5 built on Oct  9 2015 06:58:24
14:04 mlncn joined #gluster
14:04 volga629 joined #gluster
14:04 volga629 Hello Everyone, what does this mean from vol status: FAILED : Staging failed on vg0.networklab.lan. Error: Volume name get failed
14:04 ndevos volga629: that can happen when you execute some gluster commands on multiple systems at the same time
14:05 B21956 left #gluster
14:05 volga629 that return from gluster vol status
14:05 bennyturns joined #gluster
14:05 volga629 can't restore volume
14:05 chirino joined #gluster
14:06 atrius joined #gluster
14:06 ndevos volga629: oh, also make sure that you have the same version of glusterd running on all systems, I think there was an issue with updating that can cause this too
14:07 nbalacha joined #gluster
14:09 nbvfuel joined #gluster
14:11 volga629 let me check
14:11 volga629 I have some error in log
14:11 volga629 let me paste
14:11 ira joined #gluster
14:12 volga629 http://fpaste.org/288462/70782851/
14:12 glusterbot Title: #288462 Fedora Project Pastebin (at fpaste.org)
14:13 nbvfuel We're on gluster 3.5.2-- Recently used the 'aws' cli tool to sync data (mounted from gluster) to S3.  That sync process created the .glusterfs guids in the brick file system, some of which our sophos AV marked as viruses (users!)
14:13 glusterbot nbvfuel: 3.5.2's karma is now -1
14:15 nbvfuel I'm wondering if it's a bad thing that Sophos is looking at the brick metadata dir, and potentially messing with something?
14:17 tSilenzio is glusterfs 3.7 considered stable? should I try downgrading?
14:17 julim joined #gluster
14:20 haomaiwang joined #gluster
14:21 ndevos nbvfuel: yeah, it is best to have a virus scanner only access mounted volumes, that way it can actually take actions if there are infected files
14:23 kovshenin joined #gluster
14:23 nbvfuel ndevos: Thanks.  The virus scanner changes the ownership of the file to root (the guids), kills the perms, and then creates a backup file with the correct ownership/permissions: df959873-b7b2-40c0-a649-3e63ce428b31.permission-backup
14:23 nbvfuel Will these "hurt" glusters view of the world?  Should I attempt to clean them up?
14:24 ndevos volga629: I think it is the "known issue" mentioned in https://github.com/gluster/glusterfs/blob/v3.7.6/doc/release-notes/3.7.6.md
14:24 glusterbot Title: glusterfs/3.7.6.md at v3.7.6 · gluster/glusterfs · GitHub (at github.com)
14:25 volga629 thank you, it was a version mismatch, the volume is back online. Where can I find updated docs for 3.7 on enabling nfs support?
14:29 shubhendu joined #gluster
14:30 rafi joined #gluster
14:33 plarsen joined #gluster
14:34 hamiller joined #gluster
14:38 overclk_ joined #gluster
14:39 arcolife joined #gluster
14:41 rafi joined #gluster
14:41 ira joined #gluster
14:42 rafi1 joined #gluster
14:44 shubhendu joined #gluster
14:50 hgowtham_ joined #gluster
14:56 dgandhi joined #gluster
14:57 dgandhi joined #gluster
14:58 dgandhi joined #gluster
14:58 gb21 joined #gluster
14:59 dgandhi joined #gluster
14:59 LebedevRI joined #gluster
15:00 dgandhi joined #gluster
15:01 dgandhi joined #gluster
15:01 itisravi joined #gluster
15:01 haomaiwa_ joined #gluster
15:02 ndevos nbvfuel: the files under the .glusterfs/ directory are hard links (or symlinks for directories); if files get added there with some other name, they will occupy space that cannot be freed through a mountpoint because gluster does not know about them
15:02 nbvfuel ndevos: Makes sense-- it's a small number, but it sounds like it shouldn't cause an issue?
15:02 glusterbot nbvfuel: sense's karma is now -1
15:02 dgandhi joined #gluster
15:03 ndevos nbvfuel: it depends a little on how the virus scanner does it, is the adding of the suffix a rename? that would be bad
15:04 dgandhi joined #gluster
15:04 ndevos nbvfuel: you can check with "ls -li $FILE" to see if the 2nd column has "1" as a value, that would mean the hardlink that Gluster expects is broken
15:05 skylar joined #gluster
15:05 dgandhi joined #gluster
15:05 ndevos nbvfuel: check that directly on the brick, the file with the new suffix should have "1", the correct GFID file should have "2" (or more)
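A sketch of that check, run directly on a brick (the brick path below is hypothetical, the GFID is the one from the earlier paste); the second column of "ls -li" is the hard-link count:

    ls -li /bricks/brick1/.glusterfs/df/95/df959873-b7b2-40c0-a649-3e63ce428b31
    # link count 2 or more: the hard link gluster expects is intact
    # link count 1: the GFID file has been detached from the real file name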
15:05 dgandhi joined #gluster
15:05 Pupeno_ joined #gluster
15:06 ndevos volga629: if you want to use NFSv3, that is enabled by default, just like in previous versions
15:07 ndevos volga629: if you want to use NFS-Ganesha, this document would be a good starting point: https://github.com/thotz/glusterdocs/blob/master/Administrator%20Guide/NFS-Ganesha%20GlusterFS%20Intergration.md
15:07 glusterbot Title: glusterdocs/NFS-Ganesha GlusterFS Intergration.md at master · thotz/glusterdocs · GitHub (at github.com)
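For plain NFSv3 (gNFS) the relevant switch is the nfs.disable volume option; a sketch with a hypothetical volume name:

    gluster volume set testvol nfs.disable off   # gNFS is on by default in 3.7; this re-enables it if it was turned off
    showmount -e localhost                       # confirm the volume shows up in the export list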
15:14 poornimag joined #gluster
15:15 volga629 thank you
15:16 nbvfuel ndevos: You're correct the original file has a '2' the extra guid.permission-backup file is a '1'.
15:18 gothos joined #gluster
15:18 Manikandan joined #gluster
15:18 ndevos nbvfuel: thats a serious breakage then, and should get corrected, depending on how you access the file through the volume, it can be either one of them
15:22 nbvfuel ndevos: Here's ls output: https://pastee.org/6yf4x  What would I do to "correct" it?
15:22 glusterbot Title: Paste: 6yf4x (at pastee.org)
15:24 DV joined #gluster
15:29 maserati joined #gluster
15:29 ndevos nbvfuel: if the only difference is the permissions, I would change them back and delete the .permission-backup file
15:29 Pupeno joined #gluster
15:32 ndevos nbvfuel: the GFID file should have a "2" in "ls -li" for the link-count, that is the important part
15:33 ndevos nbvfuel: in the case of the above paste, the new .permission-backup is just never used by gluster, nobody could access that through a volume
15:33 deniszh joined #gluster
15:34 ndevos nbvfuel: so, you're pretty safe, it was not possible for someone to write unintentionally to the .permission-backup file
15:36 nbvfuel ndevos: Great-- thanks for the extra info.  Since these are files that we don't want anyway-- the order of operations to fix this should be: 1) Configure AV to NOT scan the base brick file system, 2) Fix permissions on the files and remove .permission-backup, 3) Delete as normal via the mount point.
15:36 glusterbot nbvfuel: Great's karma is now -1
15:37 glusterbot nbvfuel: anyway's karma is now -1
15:39 ndevos nbvfuel: yes, that sounds like a good procedure to me too :)
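A sketch of steps 2 and 3 for one affected file, with hypothetical paths, ownership and mount point (step 1, excluding the brick filesystem from the AV scan, is scanner-specific and not shown):

    # 2) on each brick: restore the GFID file's ownership/permissions, drop the scanner's backup
    chown appuser:appgroup /bricks/brick1/.glusterfs/df/95/df959873-b7b2-40c0-a649-3e63ce428b31
    chmod 0644 /bricks/brick1/.glusterfs/df/95/df959873-b7b2-40c0-a649-3e63ce428b31
    rm /bricks/brick1/.glusterfs/df/95/df959873-b7b2-40c0-a649-3e63ce428b31.permission-backup
    # 3) then remove the unwanted file through the mount point as usual
    rm /mnt/glustervol/path/to/unwanted-file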
15:46 ayma joined #gluster
15:53 overclk joined #gluster
15:53 Doyle joined #gluster
15:54 Akee joined #gluster
15:57 nbvfuel ndevos: One problem with that plan-- I know the effected files by their .glusterfs/guid-path, how can I translate that back to their "real" file names as available via the normal mount point?
15:57 glusterbot nbvfuel: plan's karma is now -1
15:57 nbvfuel UGH.  I need to stop doing dash dash.
15:58 ndevos nbvfuel: you cant, sorry, you will need to change those permissions on all of the bricks
15:58 ndevos nbvfuel: well, maybe you can see the actual filename on a mounted volume under the .gfid/ directory
15:59 ndevos nbvfuel: uh, I mean, set the permissions through the .gfid/ directory instead of doing that on the bricks
16:00 nbvfuel ndevos: I'm not sure I follow.  The mounted gluster vols don't have a .gfid
16:01 ron-slc_ joined #gluster
16:01 ndevos nbvfuel: ah, maybe enabling the .gfid/ directory requires some volume option, I dont remember exactly
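If memory serves, the virtual .gfid/ tree is exposed by the FUSE client when the volume is mounted with the aux-gfid-mount option rather than via a volume setting; worth verifying against the docs for your version. A sketch with hypothetical server, volume and mount point names:

    mount -t glusterfs -o aux-gfid-mount server1:/testvol /mnt/testvol
    stat /mnt/testvol/.gfid/df959873-b7b2-40c0-a649-3e63ce428b31   # access a file by its GFID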
16:01 haomaiwa_ joined #gluster
16:02 wushudoin joined #gluster
16:03 jwd joined #gluster
16:13 Eychenz joined #gluster
16:14 Eychenz joined #gluster
16:15 overclk joined #gluster
16:16 kanagaraj joined #gluster
16:21 jiffin joined #gluster
16:22 DV_ joined #gluster
16:22 ira joined #gluster
16:24 kotreshhr joined #gluster
16:48 nbvfuel_ joined #gluster
16:49 nbvfuel_ ndevos: I found this utility.  It looks like it would be helpful in finding the original file: https://gist.github.com/semiosis/4392640
16:49 glusterbot Title: Glusterfs GFID Resolver · Turns a GFID into a real path in the brick · GitHub (at gist.github.com)
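The same lookup can also be done by hand: since the GFID file and the real file are hard links to one inode (for regular files), find -samefile on the brick turns a GFID back into a path. A sketch with a hypothetical brick path:

    BRICK=/bricks/brick1
    GFID=df959873-b7b2-40c0-a649-3e63ce428b31
    find "$BRICK" -path "$BRICK/.glusterfs" -prune -o \
         -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -print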
17:01 haomaiwa_ joined #gluster
17:02 mhulsman joined #gluster
17:03 p0rtal joined #gluster
17:05 ivan_rossi left #gluster
17:08 overclk joined #gluster
17:13 mlncn joined #gluster
17:13 calavera joined #gluster
17:17 rafi joined #gluster
17:22 p0rtal joined #gluster
17:32 rafi joined #gluster
17:34 p0rtal joined #gluster
17:36 kotreshhr left #gluster
17:36 nangthang joined #gluster
17:54 F2Knight joined #gluster
17:58 Rapture joined #gluster
18:01 haomaiwa_ joined #gluster
18:01 Philambdo1 joined #gluster
18:12 Philambdo1 joined #gluster
18:16 vimal joined #gluster
18:16 p0rtal joined #gluster
18:36 rafi joined #gluster
18:38 p0rtal joined #gluster
18:42 hagarth joined #gluster
18:51 B21956 joined #gluster
18:52 cliluw joined #gluster
18:57 shaunm joined #gluster
19:02 haomaiwang joined #gluster
19:32 EinstCrazy joined #gluster
19:49 RedW joined #gluster
19:55 turkleton joined #gluster
19:55 turkleton Does anyone here know how to restart GNFS? Is it just disable and re-enable?
19:57 ndevos turkleton: if you want to restart it on all your storage servers, I would do "gluster volume set $VOLUME nfs.disable true ; gluster volume reset $VOLUME nfs.disable"
19:58 ndevos turkleton: if you want to restart it on one server: gluster volume status ; kill $PID ; systemctl restart glusterd
20:00 turkleton Interesting, GNFS is showing started, but there is an empty exports list
20:01 haomaiwa_ joined #gluster
20:03 deniszh joined #gluster
20:03 turkleton Also, I'm running Ubuntu 14.04, so I don't have systemd. I have two init files, one in /etc/init/glusterfs-server.conf and one in /etc/init.d/glusterfs-server. Ended up killing all GFS servers, starting it, stopping and starting the volume, and exports list is still empty
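On upstart-based Ubuntu 14.04 the per-node restart that ndevos described would look roughly like this (a sketch; the PID is whatever the "NFS Server on localhost" line reports):

    gluster volume status               # note the PID of the "NFS Server on localhost" line
    kill <nfs-server-pid>
    service glusterfs-server restart    # the upstart job behind /etc/init/glusterfs-server.conf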
20:04 turkleton Some additional information is that we're working on building in automated recovery of Gluster nodes at AWS
20:04 turkleton So I killed one, and I'm intentionally in a half-way state where the peer is gone so I can attempt to write a file over NFS and see if it replicates
20:10 rafi joined #gluster
20:11 * PatNarciso likes turkleton's half-way state test.
20:11 turkleton We're doing our best to be diligent. :)
20:11 DV joined #gluster
20:11 turkleton For some reason NFS claims to be started, I'm going to bring it back to a "fixed" state and see if NFS will show the exports then.
20:14 * turkleton rages.
20:14 turkleton Once I brought the second node up, NFS started working again
20:15 turkleton I'm going to fully provision them and see if it's due to some other oddity
20:16 bluenemo joined #gluster
20:35 turkleton So, my half-way state test worked :)
20:36 turkleton I created a dir over NFS after mounting it via NFS, NFS didn't go away (not sure why it went away earlier), and then brought the second node up which autohealed and created my dir
20:36 turkleton WHEEEEEEEE
20:40 timotheus1 joined #gluster
20:45 ndevos turkleton: you need to make sure that no nfs services from the OS/kernel have been started, you can only run one set of NFS services (gNFS *or* kernel)
20:46 ndevos turkleton: that also includes the kernel nfs-client, you should not mount anything over nfs on a gluster storage server
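A quick sketch of how to check for that kind of conflict on a storage server (assuming Ubuntu 14.04 as above):

    service nfs-kernel-server status   # the kernel NFS server should be stopped or not installed
    rpcinfo -p | grep -w nfs           # shows which NFS service has registered on port 2049
    mount -t nfs                       # should list no kernel NFS client mounts on this node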
20:56 ira joined #gluster
21:01 haomaiwa_ joined #gluster
21:05 csim joined #gluster
21:09 lpabon joined #gluster
21:14 calavera joined #gluster
21:24 p0rtal joined #gluster
21:25 cholcombe joined #gluster
21:28 deniszh joined #gluster
21:28 nbvfuel joined #gluster
21:29 diegows joined #gluster
21:32 dblack joined #gluster
21:32 Pharaoh_Atem joined #gluster
21:32 Pharaoh_Atem hey, did someone delete http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/pub.key?
21:33 Pharaoh_Atem it's causing my attempts to install oVirt to fail because Yum can't retrieve the GPG key for glusterfs
21:52 deboerm joined #gluster
21:52 deboerm_ joined #gluster
21:53 deboerm Is there a reason im getting a 404 error on the pub.key for LATEST/EPEL.repo?
21:59 deboerm left #gluster
22:01 haomaiwa_ joined #gluster
22:03 turkleton ndevos: I checked that. We're not installing NFS at all. The second node coming up caused GNFS to show the exports for some reason
22:08 deboerm_ left #gluster
22:08 DV joined #gluster
22:08 deboerm joined #gluster
22:32 poornimag joined #gluster
22:50 social joined #gluster
22:53 calavera joined #gluster
23:00 shyam joined #gluster
23:02 haomaiwa_ joined #gluster
23:06 Telsin joined #gluster
23:07 amye joined #gluster
23:40 skylar1 joined #gluster
23:52 plarsen joined #gluster
