IRC log for #gluster, 2014-09-04

All times shown according to UTC.

Time Nick Message
00:04 bala joined #gluster
00:09 goo_ joined #gluster
00:09 abyss^^_ joined #gluster
00:09 Nuxr0 joined #gluster
00:11 skippy_ joined #gluster
00:11 twx joined #gluster
00:11 saltsa joined #gluster
00:11 jbrooks_ joined #gluster
00:12 Norky_ joined #gluster
00:12 Intensity joined #gluster
00:12 Intensity joined #gluster
00:12 Debolaz joined #gluster
00:12 Debolaz joined #gluster
00:13 al joined #gluster
00:13 avati joined #gluster
00:13 RobertLaptop joined #gluster
00:14 dmyers joined #gluster
00:15 dmyers joined #gluster
00:16 dmyers joined #gluster
00:38 purpleidea JoeJulian: that is sweet... send me the extras ;)
00:38 plarsen joined #gluster
00:50 gildub joined #gluster
01:08 luckyinva joined #gluster
01:19 recidive joined #gluster
01:20 bala joined #gluster
01:28 vimal joined #gluster
01:33 diegows joined #gluster
01:34 djgiggle joined #gluster
01:45 pitterfumped joined #gluster
01:49 an joined #gluster
01:56 Lilian joined #gluster
02:16 bennyturns joined #gluster
02:16 capri joined #gluster
02:20 harish joined #gluster
02:43 hagarth joined #gluster
02:48 Intensity joined #gluster
02:51 staceyfriesen joined #gluster
02:55 staceyfriesen hello there, does anyone have any experience setting up gluster in Rackspace Cloud?
02:58 staceyfriesen There is an article at http://blog.gluster.org/category/rackspace/ however it seems to be missing an important command in the instructions for partitioning and formatting
03:02 kshlm joined #gluster
03:12 JoeJulian staceyfriesen: Looks fine to me, more or less. You don't have to partition the cinder volume, so you could just put the filesystem on xvdb
03:13 JoeJulian and we prefer xfs over ext4, but not far enough to make it a problem.
03:14 staceyfriesen yeah the instructions seem good except there is a command missing when the article mentions this "This indicates that our brick needs a partition table and formatting. We can achieve this be doing the following"
03:15 staceyfriesen then no command listed
03:15 staceyfriesen that being said I think I found another article that mentioned this command: fdisk /dev/xvdb
03:16 staceyfriesen does that sound right?
03:24 JoeJulian Hah, that's correct. I missed that myself.
03:24 JoeJulian again, you don't have to partition it if you don't want to.
03:25 JoeJulian mkfs -t whatever_fs_type_you_want /dev/xvdb
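(For reference, a minimal sketch of the brick prep JoeJulian describes, assuming the device is /dev/xvdb and the brick will live under /glusterfs/brick as in the Rackspace article; the xfs inode-size option is a commonly recommended choice, not something mandated here:)

    mkfs.xfs -i size=512 /dev/xvdb        # xfs preferred; no partition table required
    mkdir -p /glusterfs/brick
    mount /dev/xvdb /glusterfs/brick      # add an /etc/fstab entry to make this persistent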
03:28 shubhendu joined #gluster
03:33 staceyfriesen cool thanks!
03:39 itisravi joined #gluster
03:40 spandit joined #gluster
03:50 bharata-rao joined #gluster
03:57 djgiggle Hi, I was thinking of using my web server as a glusterfs server and another server with bigger space (call it Server B) to serve as a peer. Is this feasible or should I just set the web server as a peer and get Server B to be the glusterfs server? Does it matter which is which?
03:57 nbalachandran joined #gluster
04:14 prasanth_ joined #gluster
04:15 anoopcs joined #gluster
04:16 kshlm joined #gluster
04:17 ndarshan joined #gluster
04:24 hchiramm joined #gluster
04:32 ninjabox1 joined #gluster
04:37 _ndevos joined #gluster
04:37 _ndevos joined #gluster
04:40 rjoseph joined #gluster
04:42 atinmu joined #gluster
04:46 Philambdo joined #gluster
04:55 prasanth_ joined #gluster
05:05 ppai joined #gluster
05:06 rafi joined #gluster
05:07 Rafi_kc joined #gluster
05:09 rafi1 joined #gluster
05:12 harish joined #gluster
05:13 hagarth joined #gluster
05:15 nishanth joined #gluster
05:22 staceyfriesen joined #gluster
05:24 Rafi_kc joined #gluster
05:25 deepakcs joined #gluster
05:26 staceyfriesen hi there, I'm setting up my first replicated volume using the instructions here: http://blog.gluster.org/category/rackspace/ and one thing I am confused about is how to write files to the gluster volume.
05:26 hchiramm joined #gluster
05:27 staceyfriesen I'm trying to use my gluster servers as clients as well, but when I wrote a test file to /glusterfs/brick/myvolume it did not appear in the other server
05:31 nshaikh joined #gluster
05:34 staceyfriesen Can a client and server be on the same physical server?
05:34 kshlm joined #gluster
05:35 Alex Yes
05:35 Alex However please note that not all reads will go to the local server, even in the event of them being identical peers
05:36 Alex or perhaps it's more appropriate to say - they *may* not :)
05:37 staceyfriesen ok... this might be a dumb question but, do I need to set up a volume file for a client even after mounting the volume on the same machine?
05:39 staceyfriesen I skipped the client set up because I thought after mounting the volume for the server, I would be able to write to the volume
05:52 Philambdo joined #gluster
05:53 bala joined #gluster
05:57 ppai joined #gluster
05:58 lalatenduM joined #gluster
05:58 Alex I don't think your question is dumb, but I'm not quite sure what you mean by volume file. Also, you shouldn't write direct to the brick, but through the gluster mount - which might be where you ended up now :)
06:05 staceyfriesen Ok thanks Alex, I think a little more reading on my part is in order... thanks again
06:07 Alex The model we have deployed is two servers, each with a brick, and both mounting through a Gluster FUSE mount. So, reasonably similar to what you're describing I think
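(In other words, the brick path and the client-facing mount point stay separate on each server; a rough sketch of the layout Alex describes, reusing the paths mentioned above, with the mount point being just an example:)

    /glusterfs/brick/myvolume                              # brick: managed by gluster, never written to directly
    mount -t glusterfs localhost:/myvolume /mnt/myvolume   # FUSE mount of the volume
    # applications read and write only under /mnt/myvolume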
06:12 kdhananjay joined #gluster
06:14 atalur joined #gluster
06:20 dusmant joined #gluster
06:21 ppai joined #gluster
06:24 jtux joined #gluster
06:28 itisravi_ joined #gluster
06:29 ws2k33 joined #gluster
06:29 staceyfriesen joined #gluster
06:30 mariusp joined #gluster
06:40 Bardack hi
06:40 glusterbot Bardack: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
06:41 Bardack we have an issue with our nas (running gluster). i'm not the one usually taking care of this, but the colleague who does isn't in yet :)
06:41 Bardack we have 70% io/wait, meaning the whole nas is screwed atm
06:41 raghu` joined #gluster
06:41 Bardack is there any way to see on which gluster share all those io are going ?
06:41 Bardack with a magic command or so ? :)
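(For questions like this, gluster does ship per-volume instrumentation; a rough sketch of the commands usually reached for, run against each suspect volume, with "myvol" as a placeholder name:)

    gluster volume profile myvol start
    gluster volume profile myvol info      # per-brick latency and fop counts
    gluster volume top myvol read          # busiest files by reads
    gluster volume top myvol write         # busiest files by writes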
06:42 dusmant joined #gluster
06:43 mariusp joined #gluster
06:44 harish joined #gluster
06:45 vimal joined #gluster
06:45 RaSTar joined #gluster
06:48 kanagaraj joined #gluster
06:51 anands joined #gluster
06:52 anands left #gluster
06:54 nshaikh joined #gluster
06:55 aravindavk joined #gluster
06:56 hybrid512 joined #gluster
07:04 VeggieMeat_ joined #gluster
07:04 samsaffron_ joined #gluster
07:05 rjoseph1 joined #gluster
07:07 staceyfriesen joined #gluster
07:07 zerick joined #gluster
07:09 shubhendu_ joined #gluster
07:15 atinmu joined #gluster
07:17 kshlm joined #gluster
07:20 kanagaraj joined #gluster
07:22 getup- joined #gluster
07:25 ricky-ti1 joined #gluster
07:33 rjoseph joined #gluster
07:35 necrogami joined #gluster
07:35 fsimonce joined #gluster
07:43 mick27 joined #gluster
07:44 liquidat joined #gluster
07:58 hybrid512 joined #gluster
08:01 staceyfriesen joined #gluster
08:02 RameshN joined #gluster
08:19 mariusp joined #gluster
08:34 saurabh joined #gluster
08:35 richvdh joined #gluster
08:38 gildub joined #gluster
08:56 staceyfriesen joined #gluster
08:56 vimal joined #gluster
09:00 partner uh, my self-heal daemon eats 80% of the memory on both boxes
09:01 partner also created a 100M log file already since log rotation (5.5 hours ago)
09:02 partner seems to be stuck with bunch of same files, all the gfid's are repeated 35 times so far
09:03 partner [2014-09-04 03:34:28.393787] I [afr-self-heal-data.c:655:afr_sh_data_fix] 0-rv0-replicate-2: no active sinks for performing self-heal on file <gfid:7fdaa498-7ea6-4caa-a99e-2dbc8f444bfd>
09:06 getup- joined #gluster
09:07 NigeyS joined #gluster
09:07 NigeyS Morning :)
09:07 NigeyS semiosis my apologies i forgot to thank you for your help with that sftp issue last night, thank you :)
09:16 partner hmph, reading some old posts there does not seem to be much explanation for why that happens. trying to clear the attributes a bit and see what happens; attributes and file hashes are identical on both sides (glusterfs 3.4.5-1 btw)
09:20 kumar joined #gluster
09:24 Slashman joined #gluster
09:27 partner trying to find the real file based on its inode number..
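(For regular files, the gfid entry under the brick's .glusterfs directory is a hard link to the real file, so it can be resolved by inode; a sketch using the gfid from the log line above, with a hypothetical brick path:)

    gfid=7fdaa498-7ea6-4caa-a99e-2dbc8f444bfd
    ls -i /path/to/brick/.glusterfs/7f/da/$gfid      # note the inode number
    find /path/to/brick -samefile /path/to/brick/.glusterfs/7f/da/$gfid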
09:32 kaushal_ joined #gluster
09:33 partner found them, all identical on both sides of the replica though i did clean the attributes in between.. ohwell, i have plenty of broken ones still to test with
09:42 partner no log entries of any self-healing though..
09:42 andreask joined #gluster
09:44 jmarley joined #gluster
09:50 staceyfriesen joined #gluster
09:54 mariusp joined #gluster
10:03 shubhendu joined #gluster
10:05 mariusp joined #gluster
10:11 kalzz joined #gluster
10:12 glusterbot New news from newglusterbugs: [Bug 1129486] adding bricks to mounted volume causes client failures <https://bugzilla.redhat.com/show_bug.cgi?id=1129486>
10:23 ghenry_ joined #gluster
10:24 ghenry joined #gluster
10:33 shubhendu_ joined #gluster
10:36 kkeithley1 joined #gluster
10:41 kkeithley1 joined #gluster
10:42 glusterbot New news from newglusterbugs: [Bug 1123475] Cannot retrieve clients from non-participating server <https://bugzilla.redhat.com/show_bug.cgi?id=1123475> || [Bug 1138229] Disconnections from glusterfs through libgfapi <https://bugzilla.redhat.com/show_bug.cgi?id=1138229> || [Bug 1117822] Tracker bug for GlusterFS 3.6.0 <https://bugzilla.redhat.com/show_bug.cgi?id=1117822>
10:44 staceyfriesen joined #gluster
10:47 andreask joined #gluster
10:48 lkoranda joined #gluster
11:06 calum_ joined #gluster
11:10 mojibake joined #gluster
11:10 hagarth joined #gluster
11:10 mojibake joined #gluster
11:26 Pupeno_ joined #gluster
11:26 geaaru joined #gluster
11:26 ira joined #gluster
11:27 geaaru hi, i'm trying to use glusterfs in my env, but i can't mount the volume from a remote host, only on localhost (no firewall rules are present). With debug i see this error:
11:27 geaaru 0-management: connection attempt on /var/run/706f334f849b57542063e5c12d70d9b5.socket failed, (Connection refused)
11:28 geaaru i use gluster with systemd, could this be related to a permission problem?
11:29 geaaru how can i force a different path for the socket files?
11:29 geaaru thanks in advance
11:31 anoopcs geaaru: remote host from which you are trying to mount is accessible, right?
11:32 anoopcs geaaru: Just try a ping to servers
11:32 geaaru anoopcs: yes, I also forced an auth.allow rule on the volume
11:33 geaaru anoopcs: networking is ok, i can also add the client as a pool peer
11:38 ppai joined #gluster
11:38 staceyfriesen joined #gluster
11:39 aravindavk joined #gluster
11:40 firemanxbr joined #gluster
11:44 dusmant joined #gluster
11:46 anoopcs geaaru: Are you trying to fuse mount or nfs?
11:49 DJCl34n joined #gluster
11:49 _weykent joined #gluster
11:50 Alex__ joined #gluster
11:50 jezier joined #gluster
11:51 SpComb joined #gluster
11:51 crashmag_ joined #gluster
11:51 DJClean joined #gluster
11:51 geaaru anoopcs: fuse
11:51 geaaru but is correct that on debug mode I see this messages: [socket.c:2820:socket_connect] 0-management: connection attempt on  failed, (Connection refused) ?
11:54 anoopcs geaaru: Does the mount hang or fail?
11:56 geaaru anoopcs: mount command fails after timeout, but from gluster node I see connection: [common-utils.c:2806:gf_is_local_addr] 0-management: 172.16.90.30 is not local
11:57 diegows joined #gluster
11:57 geaaru (gluster v. 3.5.1)
11:58 bjornar joined #gluster
11:59 anoopcs geaaru: Any errors in mount log?
11:59 anoopcs Or just warnings?
12:00 geaaru anoopcs: no :'(
12:01 geaaru anoopcs: could it be related to the "is not local" string, which prevents the client from mounting?
12:03 anoopcs geaaru: Can you explain your volume configuration?
12:03 anoopcs Just to know
12:05 geaaru anoopcs: http://pastebin.com/0jb4nnpB
12:05 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
12:05 recidive joined #gluster
12:07 anoopcs geaaru: how did you mount? did you specify any transport type during mounting?
12:08 geaaru no, just a simple mount -t glusterfs server:/volume-name
12:09 geaaru anoopcs: do you know if this debug message is correct: [socket.c:2820:socket_connect] 0-management: connection attempt on  failed, (Connection refused)
12:09 geaaru maybe the problem is related to an error in the management volume configuration
12:09 Rafi_kc geaaru, Do you really want to use transport type tcp,rdma?
12:10 rolfb joined #gluster
12:10 geaaru what does transport type tcp,rdma mean? tcp + rdma, only tcp, or only rdma transport?
12:11 Rafi_kc can u create a volume without specifying any transport type ?
12:11 geaaru Rafi_kc: i try
12:12 bennyturns joined #gluster
12:12 Rafi_kc i hope you don't have rdma support on your server machines, right ?
12:15 chirino joined #gluster
12:16 geaaru Rafi_kc: currently i'm in a development environment and yes, i don't have rdma support
12:16 geaaru Rafi_kc: new volume with transport tcp but same issue on mount
12:18 geaaru currently, the bricks are connected through an interface with a vlan, while i'm trying to mount the volume from another interface without a vlan
12:18 geaaru the gluster daemon binds to 0.0.0.0, so
12:18 _Bryan_ joined #gluster
12:18 geaaru i think that this is not a problem
12:20 harish joined #gluster
12:20 anoopcs interesting
12:20 Rafi_kc if there is no support for rdma, then the volume creation should have failed
12:21 geaaru Rafi_kc: but also with tcp+rdma ?
12:22 deepakcs joined #gluster
12:22 mariusp joined #gluster
12:23 rwheeler joined #gluster
12:24 Rafi_kc geaaru, yes it will
12:25 geaaru would it be helpful to downgrade to a 3.4.x release?
12:26 anoopcs geaaru: I don't think a downgrade is necessary at this moment. Let's try to figure out the cause
12:29 anoopcs Checking...
12:29 mariusp joined #gluster
12:29 rwheeler joined #gluster
12:30 geaaru anoopcs: ok, i try to see what happens with wireshark
12:31 jiku joined #gluster
12:32 lalatenduM joined #gluster
12:33 staceyfriesen joined #gluster
12:37 delhage joined #gluster
12:37 LHinson joined #gluster
12:40 LHinson1 joined #gluster
12:47 geaaru anoopcs Rafi_kc: maybe I understand now. With wireshark I see that in the handshake phase (after the mount command) the GETSPEC call returns a volume description whose remote-host options point to the vlan subnet (which is not reachable by the client)
12:47 geaaru that is probably the cause of the problem
12:48 suliba joined #gluster
12:49 atalur joined #gluster
12:49 Rafi_kc joined #gluster
12:49 cristov joined #gluster
12:50 geaaru if i create a volume file where the remote-host ip is on the subnet reachable from the client, will the volume description returned during the handshake be ignored?
12:52 aravindavk joined #gluster
12:58 hagarth joined #gluster
12:59 RaSTar joined #gluster
13:00 recidive joined #gluster
13:00 bene2 joined #gluster
13:05 theron joined #gluster
13:08 luckyinva joined #gluster
13:13 staceyfriesen joined #gluster
13:18 dusmant joined #gluster
13:19 B21956 joined #gluster
13:21 aravindavk joined #gluster
13:23 simulx joined #gluster
13:27 B21956 joined #gluster
13:27 julim joined #gluster
13:28 tdasilva joined #gluster
13:29 jmarley joined #gluster
13:32 77CAABKZJ joined #gluster
13:32 recidive joined #gluster
13:34 geaaru anoopcs Rafi_kc: now it works. However i think this needs an enhancement in the gluster daemon. I fixed it by using the hostname for the peer instead of the ip. But when I use a hostname in peer probe, the hostname given is used on the calling peer, while on the target host
13:34 geaaru the ip address is used
13:35 geaaru and for gluster volume create it could be helpful to have an option to force binding on 0.0.0.0 instead of the hostname, to avoid
13:35 geaaru manual changes to the volume description file
13:35 hchiramm joined #gluster
13:35 geaaru thank you all very much for the support
13:42 staceyfriesen hey all, so I currently have 2 gluster servers set up with replicated volumes. Everything appears to be running ok. However when I try to write a test file to one of the volumes, I can not see the test file on the other server. I'm looking for some guidance on what I might be doing wrong.
13:46 LebedevRI joined #gluster
13:53 xleo joined #gluster
13:56 elico left #gluster
13:59 deeville joined #gluster
14:01 qdk joined #gluster
14:03 RaSTar joined #gluster
14:05 gluster_performa joined #gluster
14:06 mick271 joined #gluster
14:07 Pupeno joined #gluster
14:08 gluster_performa Hello, I am having some performance issues with Gluster. ATM I am copying 250GB of files to Gluster and it is taking forever (been going for 4+ hours and hasn't finished). Client and hosts are in the same AWS VPC. What tuning is possible with Gluster to make it faster?
14:08 mariusp joined #gluster
14:09 jiffin1 joined #gluster
14:10 juhaj_ joined #gluster
14:11 anoopcs1 joined #gluster
14:13 semiosis_ joined #gluster
14:14 AaronGreen joined #gluster
14:15 wushudoin| joined #gluster
14:15 gluster_performa anybody got any suggestions ?
14:16 NigeyS lots of small files ?
14:16 gluster_performa between 1 and 3 MB
14:16 gluster_performa so I gess yes
14:16 gluster_performa *guess
14:17 semiosis joined #gluster
14:17 NigeyS not sure on tuning options, but gluster is apparently very slow when dealing with lots of small files
14:18 gmcwhistler joined #gluster
14:19 elico joined #gluster
14:20 gluster_performa What size do we consider to be small files? < 1MB ?
14:20 NigeyS that i'm not sure about, i'm still new to gluster myself..
14:21 harish joined #gluster
14:21 diegows joined #gluster
14:22 bharata-rao joined #gluster
14:22 Slashman joined #gluster
14:23 gluster_performa Ok thanks anyway
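(For completeness, a few volume options people commonly experiment with for small-file workloads on the 3.4/3.5 series; these are things to benchmark rather than a guaranteed fix, and the values shown are arbitrary starting points on a placeholder volume name:)

    gluster volume set myvol performance.io-thread-count 32
    gluster volume set myvol performance.cache-size 256MB
    gluster volume set myvol performance.write-behind-window-size 4MB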
14:23 gluster_performa left #gluster
14:23 coredumb joined #gluster
14:23 portante joined #gluster
14:24 B21956 joined #gluster
14:25 pdrakewe_ joined #gluster
14:25 twx joined #gluster
14:26 staceyfriesen Hey guys, I've just set up gluster on rackspace to handle 2 loadbalanced file upload servers using these instructions: http://blog.gluster.org/category/rackspace/ I think I have some confusion about how to write to the volume I created. Does anyone have some advice on how to test my setup?
14:27 staceyfriesen After writing a test file on the volume on one of the servers, I do not see it replicated on the other.
14:28 kkeithley1 joined #gluster
14:29 capri joined #gluster
14:31 skippy staceyfriesen: did you create the file directly on the brick?  Or did you mount the brick (NFS or FUSE) and write to the mounted volume?
14:31 gmcwhistler joined #gluster
14:32 R0ok_ joined #gluster
14:33 B21956 joined #gluster
14:33 luckyinva joined #gluster
14:33 17SAA4KOY joined #gluster
14:33 17SAA4J9Y joined #gluster
14:33 Pupeno_ joined #gluster
14:33 mojibake joined #gluster
14:33 anoopcs joined #gluster
14:33 AaronGr joined #gluster
14:33 jiffin joined #gluster
14:33 17SAA3B10 joined #gluster
14:33 dblack joined #gluster
14:33 jiqiren joined #gluster
14:34 jiqiren joined #gluster
14:35 dblack joined #gluster
14:43 staceyfriesen @skippy here is the way things are currently set up... hopefully this is useful.
14:43 staceyfriesen Volume Name: uploads
14:43 staceyfriesen Type: Replicate
14:43 staceyfriesen Volume ID: 8dae2aac-45d8-4fc3-81d7-4aac0580b20b
14:43 staceyfriesen Status: Started
14:43 staceyfriesen Number of Bricks: 1 x 2 = 2
14:43 staceyfriesen Transport-type: tcp
14:43 staceyfriesen Bricks:
14:43 staceyfriesen Brick1: prod-upload-server-01:/glusterfs/brick/uploads
14:43 staceyfriesen Brick2: prod-upload-server-02:/glusterfs/brick/uploads
14:44 staceyfriesen My mount is as follows
14:45 staceyfriesen "/dev/xvdb1       99G   60M   94G   1% /glusterfs/brick"
14:45 plarsen joined #gluster
14:46 staceyfriesen I am trying to write to /glusterfs/brick/uploads/test.txt
14:46 skippy is that the raw brick, or the mounted Gluster volume?
14:47 skippy you want to mount the Gluster volume somewhere and write to that.
14:47 pkoro joined #gluster
14:47 skippy mount -t glusterfs prod-upload-server-01:/uploads /some/where
14:48 skippy then when you write to /some/where, your files should be replicated
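(To make that mount survive reboots, the usual approach is an fstab entry along these lines; the mount point is just an example, and _netdev delays the mount until networking is up:)

    # /etc/fstab
    prod-upload-server-01:/uploads  /mnt/uploads  glusterfs  defaults,_netdev  0 0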
14:48 staceyfriesen oh ok, I'll give that a try
14:51 elico joined #gluster
14:51 kkeithley_ never write to the brick (backing) file system.
14:54 deepakcs joined #gluster
14:59 staceyfriesen yeah I read about never writing to the brick... but I dont think I mounted the gluster volume
15:00 staceyfriesen so I was confused on the mount
15:00 staceyfriesen I have a little more set up to do and I'll report back to y'all
15:00 staceyfriesen thank you!
15:02 lalatenduM joined #gluster
15:03 theron joined #gluster
15:04 kkeithley_ most people don't NFS mount their NFS exports locally, and writing to an NFS backing store isn't verboten. But then NFS isn't GlusterFS. ;-)
15:04 kkeithley_ skippy++ for helping
15:04 glusterbot kkeithley_: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:04 glusterbot kkeithley_: skippy's karma is now 1
15:04 kkeithley_ glusterbot-- is confused
15:04 glusterbot kkeithley_: glusterbot's karma is now 5
15:05 kkeithley_ ;-)
15:05 theron joined #gluster
15:06 kmai007 joined #gluster
15:06 jobewan joined #gluster
15:07 kkeithley_ JoeJulian: glusterbot's ping regex needs a bit of helping
15:07 glusterbot kkeithley_: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:07 kmai007 or ++
15:07 kmai007 kmai007++
15:07 glusterbot kmai007: Error: You're not allowed to adjust your own karma.
15:08 kmai007 hahahah so he fixed that, kkeithley++
15:08 glusterbot kmai007: kkeithley's karma is now 15
15:08 kkeithley_ I think it's borken on any line that ends with ping
15:08 glusterbot kkeithley_: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:08 kmai007 helping yelping
15:08 glusterbot kmai007: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:08 kkeithley_ slipping
15:09 glusterbot kkeithley_: Please don't naked ping. http://blogs.gnome.org/markmc/2014/02/20/naked-pings/
15:18 dtrainor joined #gluster
15:20 dtrainor joined #gluster
15:22 georgeh_ joined #gluster
15:23 mojibake joined #gluster
15:28 pdrakeweb joined #gluster
15:30 staceyfr_ joined #gluster
15:30 coredump joined #gluster
15:36 bala joined #gluster
15:36 Debolaz joined #gluster
15:36 Debolaz joined #gluster
15:41 aravindavk joined #gluster
15:42 staceyfr_ Hey skippy all good!! thanks again :)
15:42 skippy yay!
15:42 staceyfr_ the instructions I read missed that important piece
15:42 staceyfr_ mounting the gluster volume
15:43 recidive joined #gluster
15:48 deeville is it possible to mount a gluster volume as read-only using the native client?
15:48 PeterA joined #gluster
15:48 skippy sure
15:50 elico joined #gluster
15:50 kmai007 in the fstab you can list options of ro
15:50 kmai007 or you  can manually mount it as such
15:51 kmai007 mount -t glusterfs -o ro <server>:/<vol>  /mntpiont
15:52 deeville kmai007, thanks. I'm using autofs and doesn't seem to be working. Maybe I'm missing something. But glad to know it's possible
15:56 deeville kmai007, it's working. thanks again
15:58 jmarley joined #gluster
15:58 RameshN joined #gluster
16:00 geaaru joined #gluster
16:12 bala joined #gluster
16:14 RameshN joined #gluster
16:14 chirino joined #gluster
16:14 Philambdo joined #gluster
16:42 lmickh joined #gluster
16:58 anoopcs joined #gluster
17:02 bala joined #gluster
17:03 DV__ joined #gluster
17:14 glusterbot New news from newglusterbugs: [Bug 1138385] [DHT:REBALANCE]: Rebalance failures are seen with error message " remote operation failed: File exists" <https://bugzilla.redhat.com/show_bug.cgi?id=1138385>
17:14 LHinson joined #gluster
17:16 cultavix joined #gluster
17:17 zerick joined #gluster
17:23 sputnik13 joined #gluster
17:33 failshell joined #gluster
17:34 failshell on 3.5.2, i have this test volume. i set features.read-only: on for it. mounted the volume, i'm still able to write to it. am i misunderstanding that option?
17:34 skippy did you restart the volume after setting that option?
17:35 failshell nope
17:35 failshell ah here we go
17:35 failshell thanks skippy
17:36 skippy it's not clear which options require a volume restart.
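(So the working sequence failshell ended up with looks roughly like this, with "testvol" standing in for the unnamed test volume:)

    gluster volume set testvol features.read-only on
    gluster volume stop testvol
    gluster volume start testvol
    # remount on the client if writes still succeed after the restart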
17:39 sputnik13 joined #gluster
17:41 anoopcs joined #gluster
17:42 recidive joined #gluster
17:47 sputnik13 joined #gluster
17:48 swc|666 joined #gluster
17:49 gmcwhistler joined #gluster
17:50 sputnik13 joined #gluster
17:54 _dist joined #gluster
18:00 recidive joined #gluster
18:05 dberry joined #gluster
18:05 dberry joined #gluster
18:06 theron joined #gluster
18:10 ira joined #gluster
18:12 edong23 joined #gluster
18:16 sputnik13 joined #gluster
18:21 P0w3r3d joined #gluster
18:23 NigeyS just a quick question on replication, say fs1 went down, and users continued to upload to fs2, fs1 would resync when it came back online automatically right ?
18:23 _dist yeap
18:24 NigeyS great, thanks _dist
18:24 _dist the self heal daemon would run and within 10 min would start healing, it might take a while to heal completely depending on how much data was written while fs1 was down
18:25 xrandr_work joined #gluster
18:25 xrandr_work hello. How can I reconnect a peer after said peer has been rebooted?
18:25 NigeyS got ya, i was debating whether or not to allow users to sftp to just fs1 or fs2 and use rrdns to bounce them to whichever server, but wondered what would happen if 1 of them went down, how long they'd be out of sync for etc
18:26 _dist make sure you don't write directly to the bricks, you need to write through a fuse or vfs mount
18:26 semiosis xrandr_work: should be automatic
18:26 xrandr_work semiosis: well on one node it is, on the other it is not for some reason
18:26 NigeyS _dist using fuse mounts, i hear writing to bricks is bad!
18:26 semiosis xrandr_work: try restarting glusterd (which is called glusterfs-server on debian/ubuntu distros)
18:26 _dist NigeyS: yeah, if you don't go thorugh the mounts that data will never replicate
18:26 doo joined #gluster
18:27 _dist or worse stuff can happen :)
18:27 NigeyS eek, think ill pass on that then :D
18:27 xrandr_work semiosis: ok
18:35 deeville Hi folks, what does "Staging failed on <bunch of alphanumerics>. Please check log file for details" mean? And which log file is it referring to?
18:35 ThatGraemeGuy joined #gluster
18:36 sspinner left #gluster
18:36 semiosis deeville: what command produced that error?
18:37 deeville semiosis, gluster volume status
18:37 semiosis the log would be /var/log/etc-glusterfs-glusterd.log
18:37 semiosis put it on pastie.org if you want us to take a look
18:40 ThatGraemeGuy joined #gluster
18:41 _Bryan_ joined #gluster
18:41 deeville semiosis, I think this particular node is "Peer rejected", let me fix that first and see if it goes away
18:42 semiosis ,,(peer rejected)
18:42 glusterbot I do not know about 'peer rejected', but I do know about these similar topics: 'peer-rejected'
18:42 semiosis ,,(peer-rejected)
18:42 glusterbot http://www.gluster.org/community/documentation/index.php/Resolving_Peer_Rejected
18:42 deeville semiosis, thanks, that's what i usually follow
18:47 MacWinner joined #gluster
18:47 Philambdo joined #gluster
18:56 xrandr_work semiosis, can it take awhile for the two peers to reconnect? I've done the gluster service restart and it still hasn't reconnected
18:59 JoeJulian shouldn't be more than about 3 seconds.
19:00 mick27 joined #gluster
19:01 xrandr_work JoeJulian: When I do gluster peer status on the server that hosts the gluster volume, it says that the peer (zeus) is connected. When on the zeus server, and do gluster peer status, it says that the peer1 server is disconnected.
19:06 longshot902 joined #gluster
19:07 semiosis xrandr_work: did an IP change?
19:07 semiosis or iptables kick in?
19:07 xrandr_work semiosis: nope, static ips
19:07 JoeJulian or selinux?
19:07 kmai007 are you allowing the correct ports
19:08 xrandr_work selinux should be in permissive mode
19:08 xrandr_work i just disabled both firewalls
19:08 kmai007 xrandr_work: @ports
19:08 kmai007 what does gluster volume status show you ?
19:09 JoeJulian @ports
19:09 glusterbot JoeJulian: glusterd's management port is 24007/tcp (also 24008/tcp if you use rdma). Bricks (glusterfsd) use 49152 & up since 3.4.0 (24009 & up previously). (Deleted volumes do not reset this counter.) Additionally it will listen on 38465-38467/tcp for nfs, also 38468 for NLM since 3.3.0. NFS also depends on rpcbind/portmap on port 111 and 2049 since 3.4.
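(Translated into firewall terms, the ports glusterbot lists above correspond to rules roughly like these on a 3.4+ server; the brick range is open-ended upward, so size it to your brick count:)

    iptables -A INPUT -p tcp --dport 24007:24008 -j ACCEPT    # glusterd (+ rdma)
    iptables -A INPUT -p tcp --dport 49152:49200 -j ACCEPT    # brick daemons
    iptables -A INPUT -p tcp --dport 38465:38468 -j ACCEPT    # gluster NFS / NLM
    iptables -A INPUT -p tcp --dport 2049 -j ACCEPT           # NFS since 3.4
    iptables -A INPUT -p tcp --dport 111 -j ACCEPT            # portmapper
    iptables -A INPUT -p udp --dport 111 -j ACCEPT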
19:09 xrandr_work and it just magically connected
19:09 semiosis iptables
19:10 xleo joined #gluster
19:11 _dist JoeJulian: is it safe to hot-add replication bricks under running vms in the 3.5.2 08/28/2014 version? Or should I just double storage migrate my VMs to an already built replication volume
19:12 xrandr_work semiosis: ijust reissued the service iptables stop
19:12 xrandr_work then it connected
19:12 semiosis yep, i got that
19:14 xrandr_work Thanks a bunch all :)
19:15 kmai007 xrandr_work++
19:15 glusterbot kmai007: xrandr_work's karma is now 1
19:15 semiosis yw
19:17 xrandr_work left #gluster
19:21 zerick joined #gluster
19:24 Philambdo joined #gluster
19:25 sputnik13 joined #gluster
19:25 longshot902 joined #gluster
19:26 JoeJulian _dist: Not entirely sure on the 3.5 branch. I think there's one bug that might affect that. I would probably storage migrate.
19:28 lalatenduM joined #gluster
19:28 JoeJulian Or wait for 3.5.3. Tentative release is sometime next week.
19:31 _dist JoeJulian: I'll just do the storage migrate and be safe. It is safe to add a brick while the volume is stopped right ? (Different volume for files I'd prefer to do that with)
19:34 rotbeard joined #gluster
19:35 JoeJulian _dist: Yes. It's safe with it not stopped. Just not with open files.
19:36 JoeJulian ... open files that need to stay open long-term.
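(For reference, the kind of hot-add _dist is asking about would be issued roughly like this, with hypothetical volume and host names, raising a 1x2 replica volume to replica 3 and then letting self-heal populate the new brick:)

    gluster volume add-brick myvol replica 3 newhost:/export/brick1
    gluster volume heal myvol full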
19:36 doo joined #gluster
19:38 kmai007 i want to say, its my b-day today and i'm not doing any work!
19:39 sickness :)
19:39 JoeJulian Happy birthday!
19:40 sputnik13 joined #gluster
19:40 _dist JoeJulian: Thanks for making that more clear. Also happy birthday! :) not doing work can be a blast sometimes
19:42 sickness happy birthday ;)
19:44 semiosis kmai007: http://goo.gl/P3MnHE
19:44 semiosis wouldn't be a #gluster birthday without ants & cake!
19:44 sputnik13 joined #gluster
19:45 JoeJulian semiosis++
19:45 glusterbot JoeJulian: semiosis's karma is now 2000004
19:45 kmai007 ok break down the ants
19:45 kmai007 why the ants
19:45 kmai007 b/c they can lift 10x their weight?
19:50 semiosis gluster mascot is an ant, idk why
19:50 semiosis but that would make sense
19:50 semiosis wow where'd that karma come from?!
19:51 JoeJulian You're just that popular I guess.
19:51 kmai007 i add it earlier today
19:51 kmai007 hhahaha
19:51 kmai007 i cannot add it to myself i found out today
19:51 JoeJulian It's probably about accurate.
19:52 semiosis haha
19:53 _dist Gluster doesn't have enough ant references, the only one I can think of is the "crawl" for self heal, anyone else play sim ant back in the day? :)
19:53 JoeJulian johnmark: ^^^
20:01 Ramereth joined #gluster
20:02 Pupeno_ joined #gluster
20:02 sputnik13 joined #gluster
20:03 lalatenduM semiosis, JoeJulian _dist johnmark check this ant colony video http://www.youtube.com/watch?v=lFg21x2sj-M
20:04 glusterbot Title: Giant Ant Hill Excavated - YouTube (at www.youtube.com)
20:05 lalatenduM I think ants build something similar to what gluster is trying to solve :)
20:06 semiosis wow!
20:09 _dist yeah, that's amazing
20:13 _dist seems like libgfapi performance in 3.5.2 is much better than 3.4.2-1 for random and sequential r/w
20:13 _dist is that expected?
20:14 lalatenduM _dist, I dont think so , never heard of performance improvement changes in 3.5.2 , ndevos would be the right person to tell this
20:16 _dist yeah, I'm surprised too, maybe it's another factor aside from upgrading gluster, or maybe it's VM only and those xattr problems for replication had a performance impact
20:16 lalatenduM joined #gluster
20:16 mariusp joined #gluster
20:16 JoeJulian seems more likely.
20:16 mariusp joined #gluster
20:18 LHinson1 joined #gluster
20:18 JoeJulian we may never be able to look back and understand the ant. johnmark wrote about it but apparently the rest is lost in the bit bucket: http://blog.gluster.org/2011/10/pictures-of-an-acquisition/
20:18 glusterbot Title: Pictures of an Acquisition | Gluster Community Website (at blog.gluster.org)
20:23 johnmark JoeJulian: heh :)
20:23 johnmark the ant was chosen *way* back in tehearly days of the project
20:23 johnmark for reasons that are... unclear :)
20:24 johnmark you should corner hagarth and ask him :)
20:24 johnmark hagarth: ^^^
20:24 johnmark :)
20:24 johnmark and then VCs got involved with the company, hired marketdroids, and they did away with the and :)
20:24 johnmark er ant
20:24 johnmark and one of the first things they asked me to do when i started was bring back the ant
20:25 johnmark so we created a new and cute version, and that's what you see now :)
20:25 johnmark it signifies 2 things:
20:25 johnmark 1. community. an army of ants is capable of building something greater than the sum of its parts
20:26 johnmark 2. distributed data. Ants know where to find things and they can travel long distances and still locate food stuffs by the trail that other ants lay
20:26 johnmark so... a bit of a stretch? but whatever, it works for me :)
20:26 kmai007 nice
20:26 kmai007 hi johnmark i thought you moved on to other projects
20:28 kmai007 happy to inform you another glusterfs prod success story <----
20:28 glusterbot kmai007: <--'s karma is now -2
20:28 kmai007 wtf
20:28 kmai007 ++++++++++
20:28 glusterbot kmai007: ++++++++'s karma is now 1
20:39 bene2 joined #gluster
20:40 bene2 I'm running on SSD bricks with multi-threaded epoll patch 3842 on SSD volume, and there's no one hot thread anymore, am getting some smallfile speedups. think you'll like it!  hope we can get that in everglades
20:40 johnmark lol
20:42 bene2 ?
20:44 semiosis bene2: everglades?
20:44 bene2 sorry, I forgot, this is #gluster not #rhs.
20:46 hchiramm joined #gluster
20:50 dtrainor joined #gluster
20:57 recidive joined #gluster
21:08 mick271 joined #gluster
21:28 PeterA joined #gluster
21:30 B21956 joined #gluster
21:50 justyns joined #gluster
22:02 bennyturns joined #gluster
22:06 toordog-work is it possible to create a volume without a dns server and using hostname and /etc/hosts?
22:10 semiosis sure
22:11 _dist toordog-work: I'd actually recommend that, unless you're using dynamic ip, which I wouldn't recommend :)
22:14 JoeJulian I recommend against it. Actual DNS is much easier to manage and maintain.
22:15 JoeJulian but to each their own.
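(For a DNS-less lab, identical /etc/hosts entries on every server and client are enough; something like the following, with invented addresses and the hostnames used later in the conversation:)

    # /etc/hosts on every node
    192.168.1.11  glusterfs1
    192.168.1.12  glusterfs2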
22:16 toordog-work well it is just a lab and I don't feel like setting up a dns server just for that
22:17 toordog-work but my volume create command returns an error and I assumed the hostname is the issue, but maybe not.
22:17 toordog-work gluster volume create Replicat01 replicat 2 transport tcp glusterfs1:/exports/sdb1 glusterfs2:/exports/sdb1
22:17 toordog-work Wrong brick type: replicat, use <HOSTNAME>:<export-dir-abs-path>
22:18 toordog-work hostname:abs_path is correct, is there anything else i do wrong?
22:26 Pupeno joined #gluster
22:26 semiosis toordog-work: s/replicat 2/replica 2/
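(i.e. with that substitution applied, the command from above becomes:)

    gluster volume create Replicat01 replica 2 transport tcp glusterfs1:/exports/sdb1 glusterfs2:/exports/sdb1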
22:27 toordog-work awww damn typo
22:28 toordog-work volume create: Replicat01: failed: /exports/sdb1/brick is already part of a volume
22:28 toordog-work i'm unlucky lol
22:29 toordog-work i did a gluster volume delete Replicat01 and Simple01 *my failed attempt* and it says the volume doesn't exist for both
22:29 semiosis delete & re-create the brick directory
22:29 Pupeno joined #gluster
22:29 toordog-work so I don't know what make it think it is part of a volume
22:29 toordog-work ok
22:29 semiosis or just delete it, and gluster will create it when you create the volume
22:29 semiosis it's an xattr on the brick dir
22:29 semiosis path or prefix is already
22:30 semiosis path or prefix of it is already par
22:30 semiosis hmm
22:30 semiosis @path or prefix
22:30 glusterbot semiosis: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
22:30 semiosis there it is!
22:30 semiosis path or a prefix
22:30 semiosis path or a prefix of it
22:30 glusterbot semiosis: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
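(The linked article's fix amounts to clearing the leftover gluster metadata from the old brick directory, roughly as below; the path is taken from the error message above, and the article itself should be checked before running this:)

    setfattr -x trusted.glusterfs.volume-id /exports/sdb1/brick
    setfattr -x trusted.gfid /exports/sdb1/brick
    rm -rf /exports/sdb1/brick/.glusterfs
    # then restart glusterd on that node and retry the volume create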
22:30 semiosis yes!
22:30 ttk joined #gluster
22:31 toordog-work it worked
22:31 semiosis \o/
22:32 toordog-work you rock! :)
22:33 toordog-work i didn't realize there was an xattr/fattr on the object for gluster
22:34 JoeJulian GlusterFS uses xattrs extensively.
22:35 toordog-work ok
22:35 toordog-work getfattr doesn't return anything on my brick directory
22:35 toordog-work mmm
22:35 semiosis ,,(extended attributes)
22:35 glusterbot (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
22:35 semiosis you need the -m . option or you wont see trusted.*
22:36 JoeJulian trusted, security and one or two others are filtered by default. You also must be root.
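(Applied to the brick from the earlier error, that looks like this when run as root on the server holding it:)

    getfattr -m . -d -e hex /exports/sdb1/brick
    # expect trusted.gfid and trusted.glusterfs.volume-id once the directory has been used as a brick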
22:39 toordog-work ok so -d wouldn't show it?
22:44 nergdron joined #gluster
22:45 nergdron Hey all... looking for some advice. I had a node in a 4 node cluster fail. Got a few volumes with 2x mirroring setup, bricks on each host. Replaced the host, followed the docs to re-join it, and then healed and rebalanced the volumes.
22:46 nergdron all operations claimed they completed, but the existing hosts have like 70-80% disk used, and the replaced one only has 35% used.
22:46 nergdron That's after a full rebalance on all volumes.
22:46 nergdron Any suggestions on how to get it to actually rebalance things?
22:47 nergdron may be worth noting that before the failure, the disk usage was relatively even between all 4 nodes, and at the same level... so it just seems like data is missing from the node that's been replaced.
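(A few commands that help confirm whether a replaced node really holds all of its replica data; volume name is a placeholder, and output interpretation varies a bit by version:)

    gluster volume heal myvol info               # entries still pending heal
    gluster volume heal myvol info split-brain
    gluster volume heal myvol full               # trigger another full crawl
    gluster volume rebalance myvol status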
22:48 recidive joined #gluster
22:50 toordog-work which rebalance algo did you use?
22:51 nergdron I wasn't aware you could specify different algorithms? The docs and command I've got just say "start/stop/status".
22:51 toordog-work ok
22:52 toordog-work I'm double checking my notes, i might be confused with something else, i was thinking of the command which has full/incremental options
22:53 nergdron There's a full option on the heal command, which is what I used after re-adding the failed node.
22:53 nergdron but it only copied maybe 20% of the data I expected back.
22:53 toordog-work sorry i think it is related to self heal
22:53 nergdron yeah
22:53 toordog-work sorry just read your reply after :P
22:54 nergdron no worries.
22:54 nergdron so yeah, I did that, but it seemed to be missing data. so I tried volume rebalances, and that moved a bit more data, but still not as much as I'm expecting.
22:54 toordog-work have you tried rerunning the rebalance?
22:54 toordog-work ok
22:54 nergdron I haven't, I can do that.
22:54 toordog-work do you think you are missing data overall ?
22:54 toordog-work did you have 2 brick on that node?
22:55 toordog-work 2 brick or more
22:55 nergdron no, the volumes still seem to have all the data they should... just 1 brick per node. but it feels like it may silently think it has 2 copies of the data when it only has one.
22:55 toordog-work wouldn't think so, unless it is an unnoticed bug
22:56 toordog-work a common mistake is misconfiguring 2 bricks on the same node as replicas
22:56 toordog-work but if you had only one node, most be good
22:56 toordog-work most = must
22:56 capriciouseducat joined #gluster
22:56 nergdron well, that's what I'm saying. before the node failed, it had way more data on it. after the failure.... it claims it's all fine, but way less data on that node.
22:56 nergdron so it feels like there must be data loss there.
22:56 nergdron but I'm not sure what or how.
22:56 toordog-work your total used space is lower than before?
22:56 nergdron on the node I replaced, yeah. before the failure, they were all about 70-80% used.
22:57 nergdron after the failure, heal, and rebalance, the replaced node is only about 35% used.
22:57 nergdron the rest are the same as before.
22:57 toordog-work but on the glusterfs mount ?
22:57 nergdron no, the mount doesn't show any change, because I'm using 2 replicas.
22:57 nergdron so it stayed the same even when the node was totally down.
22:58 nergdron I'm just concerned it doesn't appear to have properly re-replicated the data from the remaining good nodes.
22:58 toordog-work do you know exactly which brick is replicated on which brick ?
22:58 toordog-work should node01 replicate to node02 and node03 to node04
22:59 nergdron each volume is distributed-replicate 2x2, with 4 bricks, one per host.
22:59 toordog-work might be that the replication is still progressing, do you see difference after 30 minutes on the used space?
22:59 nergdron no, definitely not still replicating, there's no more network or disk io traffic.
22:59 nergdron it was very obvious when it was replicating, it maxed out the gige.
23:00 toordog-work ok
23:00 nergdron and it says it's done healing and rebalancing.
23:00 toordog-work do you see a 30% increase on the 2 other nodes ?
23:00 nergdron nope.
23:00 nergdron they're the same.
23:00 toordog-work mmm
23:00 toordog-work i'm out of clue
23:00 nergdron yeah. it's really weird.
23:00 nergdron well, thanks for talking through it with me anyway. I can't seem to figure it out.
23:00 toordog-work but still pretty noob, maybe someone more senior in gluster would have better idea
23:01 toordog-work it is clearly not something obvious
23:01 nergdron yeah, we'll see if anyone else has any comments. it definitely doesn't match my idea of how it should work.
23:01 toordog-work and at least will give some more insight to other here :)
23:01 nergdron yeah
23:05 nergdron left #gluster
23:25 delhage joined #gluster
23:25 delhage joined #gluster
23:34 elico Where can I get the full list of features that the current stable version supports?
23:36 Pupeno_ joined #gluster
23:46 bala joined #gluster
