
IRC log for #gluster, 2014-11-14


All times shown according to UTC.

Time Nick Message
00:10 MugginsM joined #gluster
00:18 al joined #gluster
00:28 smallbig_ joined #gluster
01:01 _Bryan_ joined #gluster
01:12 smallbig joined #gluster
01:13 topshare joined #gluster
01:17 durz joined #gluster
01:21 durz hi, Can I run mysqld on top of glusterfs?
01:25 mojibake joined #gluster
01:33 overclk joined #gluster
01:36 harish joined #gluster
01:58 kshlm joined #gluster
02:25 ilde joined #gluster
02:25 ilde left #gluster
02:25 ildefonso joined #gluster
02:33 topshare joined #gluster
02:38 mojibake joined #gluster
02:41 overclk joined #gluster
02:52 raghug joined #gluster
02:56 badone joined #gluster
03:01 haomaiwa_ joined #gluster
03:05 topshare joined #gluster
03:06 hagarth joined #gluster
03:10 rafi joined #gluster
03:11 Rafi_kc joined #gluster
03:14 bharata-rao joined #gluster
03:17 hagarth1 joined #gluster
03:38 mojibake joined #gluster
03:39 kshlm joined #gluster
03:44 kanagaraj joined #gluster
03:45 nixpanic joined #gluster
03:45 nixpanic joined #gluster
03:46 atinmu joined #gluster
03:46 Rydekull joined #gluster
03:47 skippy joined #gluster
03:55 overclk joined #gluster
03:57 shubhendu joined #gluster
03:59 meghanam joined #gluster
04:00 meghanam_ joined #gluster
04:05 ppai joined #gluster
04:11 anoopcs joined #gluster
04:12 ndarshan joined #gluster
04:15 glusterbot New news from newglusterbugs: [Bug 1164079] [Tracker] RDMA support in glusterfs <https://bugzilla.redhat.com/show_bug.cgi?id=1164079>
04:23 nbalachandran joined #gluster
04:24 nishanth joined #gluster
04:28 atalur joined #gluster
04:29 overclk joined #gluster
04:31 shubhendu joined #gluster
04:31 overclk joined #gluster
04:35 raghug joined #gluster
04:37 overclk joined #gluster
04:37 sahina joined #gluster
04:39 dusmant joined #gluster
04:40 mojibake joined #gluster
04:57 overclk joined #gluster
05:03 pp joined #gluster
05:05 meghanam joined #gluster
05:15 soumya_ joined #gluster
05:17 kdhananjay joined #gluster
05:17 kdhananjay left #gluster
05:20 atinmu joined #gluster
05:23 cogsu joined #gluster
05:24 deepakcs joined #gluster
05:37 cyberbootje joined #gluster
05:38 Telsin joined #gluster
05:38 mojibake joined #gluster
05:39 Telsin left #gluster
05:40 hagarth joined #gluster
05:44 lalatenduM joined #gluster
05:44 jiffin joined #gluster
05:45 glusterbot New news from newglusterbugs: [Bug 914874] Enhancement suggestions for BitRot hash computation <https://bugzilla.redhat.com/show_bug.cgi?id=914874>
05:49 ramteid joined #gluster
05:52 overclk joined #gluster
05:53 anoopcs joined #gluster
06:10 atinmu joined #gluster
06:13 nshaikh joined #gluster
06:21 shubhendu joined #gluster
06:22 soumya_ joined #gluster
06:28 topshare joined #gluster
06:29 topshare joined #gluster
06:33 kumar joined #gluster
06:38 mojibake joined #gluster
06:40 Telsin joined #gluster
06:46 saurabh joined #gluster
06:55 rgustafs joined #gluster
07:05 Debloper joined #gluster
07:06 Fen1 joined #gluster
07:11 ctria joined #gluster
07:13 hagarth joined #gluster
07:30 ghenry joined #gluster
07:30 ghenry joined #gluster
07:38 mojibake joined #gluster
07:42 elico joined #gluster
07:44 dubey joined #gluster
07:44 dubey Hi
07:44 glusterbot dubey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
07:45 dubey This is my first day to start to understand GlusterFS. I am following : https://www.digitalocean.com/community/tutorials/how-to-create-a-redundant-storage-pool-using-glusterfs-on-ubuntu-servers
07:45 glusterbot Title: How To Create a Redundant Storage Pool Using GlusterFS on Ubuntu Servers | DigitalOcean (at www.digitalocean.com)
07:46 dubey both my gluster servers are running fine and the storage volume is created
07:46 glusterbot New news from newglusterbugs: [Bug 1029482] AFR: cannot get volume status when one node down <https://bugzilla.redhat.com/show_bug.cgi?id=1029482>
07:47 dubey what I did is: created some files and a directory on one of the gluster servers, but they are not replicating to the other gluster server
07:47 dubey I haven't installed any client yet
07:47 dubey Am I on the right path?
07:48 dubey Can both servers replicate data without a client?
07:50 atalur joined #gluster
07:51 SOLDIERz_ joined #gluster
07:53 atinmu joined #gluster
07:54 dubey Can gluster server act as client too ?
07:56 d4nku joined #gluster
07:57 hagarth dubey: yes, gluster server can act as a client too. You would need to populate data from the client for replication to happen.
07:58 dubey hagarth: can two servers act as each other's client too ?
07:59 hagarth dubey: yes
08:00 dubey I have created two servers, gluster0 and gluster1, created the volume /data, and I am able to see it on both servers. Now I am trying to mount using "mount -t glusterfs gluster0.example.com:/data /data". It has been more than 10 min and nothing is happening.
08:00 RameshN joined #gluster
08:01 T0aD joined #gluster
08:02 dubey hagarth : how do i check the status ?
08:02 SOLDIERz_ joined #gluster
08:03 dubey this is the info output : Brick1: gluster0.example.com:/data
08:03 dubey Brick2: gluster1.example.com:/data
08:06 Philambdo joined #gluster
08:11 Fen1 dubey, https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/pdf/Administration_Guide/Red_Hat_Storage-3-Administration_Guide-en-US.pdf
08:13 Fen1 dubey, Mount your volume on the client node : mount -t glusterfs server1:/test-volume /mnt/glusterfs
08:14 Fen1 server1 = the IP/hostname of a server node
08:14 Fen1 test-volume = volume name
08:15 Fen1 /mnt/glusterfs = where you want to mount (directory must be created before)
08:16 Fen1 Is your volume started ?
08:16 vimal joined #gluster
08:16 fsimonce joined #gluster
08:17 Fen1 On the server node : gluster volume start test-volume
08:17 Fen1 test-volume = volume name
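A minimal sketch of the sequence described above, assuming a volume named test-volume and a server reachable as server1 (both placeholders, not taken from the channel):

    # on the server node: start the volume
    gluster volume start test-volume
    gluster volume info test-volume        # "Status: Started" confirms it is running

    # on the client node: create the mount point, then mount over FUSE
    mkdir -p /mnt/glusterfs
    mount -t glusterfs server1:/test-volume /mnt/glusterfs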
08:19 d4nku joined #gluster
08:22 SOLDIERz_ hey everyone
08:22 Fen1 SOLDIERz_, hi :)
08:24 SOLDIERz_ Yesterday we discovered something really odd with glusterfs. At the moment we are using glusterfs version 3.5.2 and rsyncing our data from the old storage to the gluster cluster
08:25 SOLDIERz_ especially the old storage sucks a little bit, but that only influences the write performance on the cluster. So yesterday we stopped the rsync and started it again
08:25 SOLDIERz_ it took at least 14 hours until the rsync started syncing again and all files were checked
08:26 SOLDIERz_ we are talking about a 12-node Gluster cluster and 4TB of data, so nothing special. And the old storage had no real load at that time.
08:27 SOLDIERz_ Is there any clue or explanation why it takes that long to start the sync again?
08:27 SOLDIERz_ Fen1 hi :-)
08:30 Fen1 SOLDIERz_ by rsync you mean geo-replication ?
08:30 SOLDIERz_ no just syncing from one storage node to the gluster cluster
08:31 SOLDIERz_ so rsync storagenode/data glusterfsvolume/gv0
08:33 SOLDIERz_ for more detail: the load on the glusterd daemons was very high, and the gluster cluster has a dedicated 10G network backend just for replication
08:33 SOLDIERz_ and also Jumbo Frames enabled
08:33 SOLDIERz_ we are transferring small files there
08:38 mojibake joined #gluster
08:40 rjoseph joined #gluster
08:43 d4nku joined #gluster
08:51 SOLDIERz_ joined #gluster
08:51 hagarth joined #gluster
08:54 d4nku joined #gluster
09:00 liquidat joined #gluster
09:03 Pupeno joined #gluster
09:03 Pupeno joined #gluster
09:05 atalur joined #gluster
09:06 dubey Fen1: Yes, volume is started
09:07 SOLDIERz__ joined #gluster
09:08 topshare joined #gluster
09:10 topshare joined #gluster
09:14 harish joined #gluster
09:14 d4nku joined #gluster
09:21 d4nku joined #gluster
09:23 Pupeno joined #gluster
09:24 dubey I am trying to do this on AWS (public cloud)
09:35 ingard joined #gluster
09:36 Bosse joined #gluster
09:38 mdavidson joined #gluster
09:40 anoopcs joined #gluster
09:40 deniszh joined #gluster
09:52 geaaru joined #gluster
09:54 geaaru hi, is it possible to change a replicated (replica 2) volume to a distributed-replicated volume (stripe 2 + replica 2)? Because if I try to add 2 bricks with stripe 2, I receive the message 'Changing the 'stripe count' of the volume is not a supported feature'. Thanks in advance
09:55 anoopcs joined #gluster
09:56 geaaru and the answer is yes: add bricks without the stripe/replica option :)
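A minimal sketch of what geaaru describes, assuming a replica 2 volume named myvol (a placeholder); adding a new pair of bricks without a stripe/replica option turns the 1 x 2 volume into a 2 x 2 distributed-replicate volume:

    gluster volume add-brick myvol server3:/export/brick1 server4:/export/brick1
    # optionally spread existing data across the new replica pair
    gluster volume rebalance myvol start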
09:57 hagarth joined #gluster
09:57 _shaps_ joined #gluster
09:59 diegows joined #gluster
09:59 soumya joined #gluster
10:04 Fen1 joined #gluster
10:05 cyberbootje joined #gluster
10:10 Lee- joined #gluster
10:15 sahina_ joined #gluster
10:17 ppai joined #gluster
10:19 rjoseph joined #gluster
10:20 nishanth joined #gluster
10:22 bjornar joined #gluster
10:40 jiffin joined #gluster
10:44 harish joined #gluster
10:52 rjoseph joined #gluster
10:53 troublesome soo, i re-created the glusterfs yesterday.. exact same behaviour..
10:53 troublesome usage just keeps going up and up with no added files
10:53 troublesome if I delete some files from a client, no space is freed up
10:57 bharata_ joined #gluster
11:12 ppai joined #gluster
11:12 rgustafs joined #gluster
11:35 lalatenduM joined #gluster
11:41 snowboarder04 joined #gluster
11:47 glusterbot New news from newglusterbugs: [Bug 1164218] glfs_set_volfile_server() method causes segmentation fault when bad arguments are passed. <https://bugzilla.redhat.com/show_bug.cgi?id=1164218> || [Bug 1164227] nfs mount via symlinks cross the volume and comes back again to same will fail <https://bugzilla.redhat.com/show_bug.cgi?id=1164227>
11:47 Pupeno joined #gluster
11:51 SOLDIERz__ joined #gluster
11:52 bene joined #gluster
11:56 meghanam joined #gluster
11:56 shubhendu joined #gluster
11:58 ppai joined #gluster
11:59 calisto joined #gluster
11:59 elico joined #gluster
12:03 saurabh joined #gluster
12:16 raghug joined #gluster
12:16 RameshN joined #gluster
12:23 anoopcs joined #gluster
12:29 lalatenduM joined #gluster
12:33 geaaru hi, is there a way to show the total size used/free for a volume, not per brick? thanks in advance
12:35 geaaru (from gluster cli)
12:36 calisto joined #gluster
12:40 ndevos geaaru: "df" when you have the volume mounted?
12:43 Philambdo joined #gluster
12:47 geaaru ndevos: yes, but from gluster cli is there a way to show volume total size , etc. ?
12:47 glusterbot New news from newglusterbugs: [Bug 1143886] when brick is down, rdma fuse mounting hangs for volumes with tcp,rdma as transport. <https://bugzilla.redhat.com/show_bug.cgi?id=1143886>
12:51 ocellus joined #gluster
13:02 calisto joined #gluster
13:06 Pupeno joined #gluster
13:07 troublesome is it possible to fix this error: Transport endpoint is not connected without unmounting?
13:07 edward1 joined #gluster
13:12 Fen joined #gluster
13:13 ndevos troublesome: it depends on why that happens... I'd say you hit a bug in the fuse-client
13:32 Philambdo joined #gluster
13:35 LebedevRI joined #gluster
13:37 troublesome ndevos it started when the gluster volume went offline
13:37 bene joined #gluster
13:37 troublesome it has come back on all nodes, except a single one
13:42 ndevos troublesome: I think it should reconnect automatically...
13:43 troublesome yea it did on the other nodes
13:43 troublesome but not here, the log is just flooded with th esame message
13:44 ndevos geaaru: no, I do not think so :-/
13:44 geaaru ndevos: ok, thank you very much for support
13:45 ndevos troublesome: hmm, maybe that one client got into a funky state... you could file a bug about that
13:45 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
13:49 troublesome its not possible to restart it? remount without rebooting?
13:50 jiffin checking
13:52 B21956 joined #gluster
13:53 virusuy joined #gluster
13:57 Slashman joined #gluster
13:59 dubey joined #gluster
14:00 dubey Hi
14:00 glusterbot dubey: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:00 meghanam joined #gluster
14:00 dubey Environment = Ubuntu 14.04
14:00 dubey Gluster ver = 3.5.2
14:00 dubey Servers = Gluster0 & Gluster1
14:00 dubey Client = Gluster0 & Gluster1
14:00 dubey Client mount path = /data/deploy
14:00 dubey Server Shared Dir. = /deploy
14:01 dubey Issue : Files created on the client at the mounted path are reflected in the shared directory on the same VM (which is also acting as Gluster1), but I don't see even a single file on Gluster0 at the shared path.
14:01 dubey What could be the reason for it not replicating from the client to the server?
14:02 Fen dubey, your servers are also client ?
14:02 dubey Yes
14:03 Fen maybe it's the reason
14:03 bennyturns joined #gluster
14:03 dubey As per the document, it says that server can act as client
14:04 Fen your goal is: when you create a file on gluster0, for example in /data/deploy, you want to have this file then in gluster0:/deploy and gluster1:/deploy, is that correct ?
14:06 haomaiwa_ joined #gluster
14:08 calisto joined #gluster
14:08 dubey should i run peer probe on both the servers ?
14:09 aixsyd joined #gluster
14:09 dubey Fen : Correct
14:09 Fen gluster0 and gluster1 are not in the same peer ?
14:10 ackjewt dubey: You should always write and read files through the glusterfs/nfs/smb mount, never directly on the xfs/ext3/whatever mountpoint
14:10 ackjewt If that's the case here
14:11 Fen if you have added gluster1 with gluster0, you don't need to add gluster0 with gluster0
14:12 Fen maybe your bricks are not good, how have you mount them ?
14:12 dubey I was looking at the howtoforge site and it says that I should run peer probe gluster0 from gluster1 and peer probe gluster1 from gluster0
14:13 Fen not necessary
14:13 dubey ackjewt : didn't get it.
14:14 Fen dubey, do "gluster peer status" on both server
14:14 Fen and check if they see each other
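A short sketch of that check, using the two hosts from this conversation, gluster0 and gluster1; as Fen notes, probing in one direction is enough:

    # on gluster0: add gluster1 to the trusted pool
    gluster peer probe gluster1.example.com

    # on both servers: confirm they see each other
    gluster peer status        # expect "State: Peer in Cluster (Connected)"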
14:15 dubey Fen : this is what i see
14:16 dubey Sorry guys, it is my first day to learn GlusterFS
14:16 hagarth joined #gluster
14:16 Fen dubey, no problem, i was like you 2 months ago :p
14:18 lpabon joined #gluster
14:19 dubey http://pastebin.com/JsZG5rhZ
14:19 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:19 ndevos troublesome: no, that message normally is a sign that the fuse-client became unusable, you really need to unmount/mount
14:20 ndevos a remount is not available for the fuse client, and I doubt it could be made to work reliably for this kind of error
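A hedged sketch of the usual recovery once a fuse mount is stuck like this; the mount point and volume name are placeholders, and a lazy unmount may be needed if processes still hold files open on it:

    umount /mnt/glusterfs                      # or: umount -l /mnt/glusterfs if it refuses
    mount -t glusterfs server1:/myvol /mnt/glusterfs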
14:21 Fen dubey your server are in cluster it's ok for that :)
14:21 Fen dubey how have you create your volume now ?
14:22 dubey Fen: I am on EC2 (AWS) and didn't attach any EBS volume (I hope it will not matter), and on gluster0 created a new directory /deploy
14:23 dubey using volume create command
14:24 Fen dubey can you write down this command ?
14:24 dubey and confirmed on both the servers g0 & g1, /deploy is present
14:24 dubey Fen: gluster volume create deploy replica 2 transport tcp gluster0.druva.com:/deploy gluster1.druva.com:/deploy force
14:25 dubey Then on G1 i created a new directory at /data/deploy and mounted /deploy
14:25 Fen dubey ok and how did you create your partition for /deploy ?
14:26 Fen on both server
14:26 dubey I created it on /root
14:26 dubey i didn't create any new partition, it is just a new directory at /
14:27 Fen dubey that's why it's not working, and it was exactly the same mistake as mine when i started :p
14:28 Fen dubey you need to create a partition to work with glusterfs :)
14:28 dubey Fen: does it require to have separate drive on both the servers ? Like /dev/sdb
14:28 calum_ joined #gluster
14:28 Fen dubey yes :)
14:28 dubey Fen : Oh!
14:28 hagarth dubey: what does your mount command look like?
14:29 tdasilva joined #gluster
14:30 dubey mount -t glusterfs gluster0.example.com:deploy  /data/deploy (on G1)
14:30 Fen dubey read chapter 6.2 : https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/pdf/Administration_Guide/Red_Hat_Storage-3-Administration_Guide-en-US.pdf
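A minimal sketch of the brick layout Fen is describing, assuming a spare block device /dev/xvdf attached to each EC2 instance (the device name and paths are illustrative only):

    # on each server: format a dedicated device and mount it for the brick
    mkfs.xfs -i size=512 /dev/xvdf
    mkdir -p /export/deploy
    mount /dev/xvdf /export/deploy
    mkdir -p /export/deploy/brick

    # then create the volume on those brick directories instead of a directory on the root filesystem
    gluster volume create deploy replica 2 transport tcp \
        gluster0.example.com:/export/deploy/brick gluster1.example.com:/export/deploy/brick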
14:32 hagarth does mount -t fuse.glusterfs list /data/deploy ?
14:34 dubey is this a command: "mount -t fuse.glusterfs list" ?
14:34 bala joined #gluster
14:35 dubey If yes, it shows me the help contents, no output
14:36 Fen dubey can you reformulate ?
14:36 dubey Fen: what does this mean ?
14:37 Fen dubey say your question in a different way
14:38 dubey Fen : is "mount -t fuse.glusterfs list" is a command ?
14:38 dubey Fen : is "mount -t fuse.glusterfs list"  a command  or "mount -t fuse.glusterfs list /data/deploy"
14:39 dubey Fen: Is this what you expected ?
14:41 dubey Fen: do i have to attach one-one ebs volumes on each ?
14:41 Fen yes :) but i don't know this command, i just know the official command "mount -t glusterfs server1:/test-volume /mnt/glusterfs"
14:41 Fen dubey i have never worked with a cloud configuration sry
14:42 dubey hagarth has suggested that command.
14:42 Fen dubey it depend maybe of his version
14:42 dubey Fen: Also, can you suggest which method is best for mounting the shared directory at boot time
14:43 Fen dubey Open the /etc/fstab file in a text editor
14:43 dubey ok
14:44 Fen dubey Add this : "HOSTNAME|IPADDRESS:/VOLNAME /MOUNTDIR glusterfs defaults,_netdev 0 0"
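For dubey's setup that template would look something like the line below (hostname and paths are the ones used earlier in the conversation, options straight from the template above):

    gluster0.example.com:/deploy  /data/deploy  glusterfs  defaults,_netdev  0 0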
14:44 hagarth dubey: "mount -t fuse.glusterfs" is the command. was curious to see the output of that.
14:45 dubey hagarth: gluster0.druva.com:/deploy on /data/deploy type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
14:45 dubey hagarth: gluster0.example.com:/deploy on /data/deploy type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
14:46 virusuy hi guys
14:46 virusuy i have a strange issue with quotas
14:46 hagarth dubey: you should be mounting from druva.com .. example.com should not get listed
14:46 virusuy when i run "gluster volume quota VOLNAME list"
14:47 dubey hagarth: there is two domain which i am using for testing
14:47 virusuy i get my quotas, but the "size" column is different from the "du -sh directory" usage
14:47 dubey is it expected output ?
14:47 Fen dubey best way is to write the IP and not the hostname ;)
14:47 hagarth is example.com resolvable in your setup?
14:47 dubey Yes, it is internal
14:47 hagarth does ping gluster0.example.com work fine?
14:49 lalatenduM joined #gluster
14:51 mrEriksson joined #gluster
14:53 dubey yes
14:53 dubey but they are two separate test env. We can opt any one from this
14:53 dubey lets take example one
14:54 hagarth dubey: ok
14:54 mrEriksson Hey folks! When adding a replicated brick to a volume, is it normal to get strange responses from df and such before the data is fully replicated?
14:54 hagarth virusuy: how different are the two?
14:54 virusuy hagarth: gluster volume quota's output shows size 925GB
14:54 virusuy du -sh's shows 1.5T
14:56 dubey Fen: the redhat storage administration guide talks about lvm & raid for physical servers; do i need to create lvm & raid on a public cloud ?
14:57 hagarth virusuy: what version of glusterfs?
14:58 virusuy hagarth: 3.4.0
14:58 virusuy hagarth: @ ubuntu Server
15:00 shubhendu joined #gluster
15:00 hagarth virusuy: I would highly recommend using 3.5.2 or 3.5.3 for quota.. quota has been significantly revamped in 3.5 and later releases.
15:00 virusuy hagarth: yeap, that seems the path i'll follow
15:01 virusuy hagarth: doing a quick search on Google, it seems like other people reported those differences on the mailing list, but all those threads are unfinished :-(
15:02 virusuy hagarth: anyway, i'll look a little bit further, but an upgrade seems the first step. Thanks for your assistance !
15:02 hagarth virusuy: please let us know how it goes in 3.5. Would love to get quota very stable in 3.5.
15:02 virusuy hagarth: i'll ! :-)
15:07 virusuy hagarth: adding a little more info to the topic
15:08 virusuy hagarth: i've another server with 3.5.2 and the quota list is ok, with the correct size. Seems like something related to versions
15:09 SOLDIERz__ joined #gluster
15:16 shubhendu joined #gluster
15:19 _dist joined #gluster
15:22 calisto1 joined #gluster
15:24 doo joined #gluster
15:25 rsquared left #gluster
15:29 hagarth virusuy: good to know
15:32 meghanam joined #gluster
15:32 meghanam_ joined #gluster
15:33 jobewan joined #gluster
16:02 rwheeler joined #gluster
16:09 redbeard joined #gluster
16:13 shubhendu joined #gluster
16:16 davemc joined #gluster
16:24 coredump joined #gluster
16:25 lmickh joined #gluster
16:25 coredump joined #gluster
16:28 Fen joined #gluster
16:31 jbrooks joined #gluster
16:41 DV joined #gluster
16:42 Philambdo joined #gluster
16:52 Slashman joined #gluster
16:53 jbrooks joined #gluster
17:00 kumar joined #gluster
17:03 nueces joined #gluster
17:27 _Bryan_ joined #gluster
17:31 doo joined #gluster
17:41 plarsen joined #gluster
17:45 raghug joined #gluster
17:48 daMaestro joined #gluster
17:52 jbrooks joined #gluster
18:07 rafi joined #gluster
18:11 anoopcs joined #gluster
18:17 PeterA joined #gluster
18:21 chirino joined #gluster
18:21 andreask joined #gluster
18:24 quique joined #gluster
18:27 quique i add a brick to a replica and then run gluster volume heal <volume> full that should make it copy all files to the new brick right?
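A minimal sketch of that sequence, with placeholder names, assuming a replica 2 volume named myvol being grown to replica 3:

    # add a third replica brick to the existing replica 2 volume
    gluster volume add-brick myvol replica 3 server3:/export/brick1

    # trigger a full self-heal so existing files are copied to the new brick
    gluster volume heal myvol full

    # watch the heal progress
    gluster volume heal myvol info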
18:35 jackdpeterson joined #gluster
18:49 n-st joined #gluster
18:59 jackdpeterson Question -- when removing bricks ... the gluster volume remove-brick replica2 server1:/export/sdd/folder server2:/export/sdd/folder start   re-allocates the data to other bricks.... and commit actually removes the brick? At what point does a brick become unwritable?
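A hedged sketch of the remove-brick lifecycle being asked about, using the brick paths from the question and a placeholder volume name myvol; start/status/commit is the documented flow, and the exact point at which the departing bricks stop taking writes may vary by version:

    # start migrating data off the bricks being removed
    gluster volume remove-brick myvol server1:/export/sdd/folder server2:/export/sdd/folder start

    # poll until the migration shows "completed"
    gluster volume remove-brick myvol server1:/export/sdd/folder server2:/export/sdd/folder status

    # only then detach the bricks from the volume for good
    gluster volume remove-brick myvol server1:/export/sdd/folder server2:/export/sdd/folder commit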
18:59 mator_ joined #gluster
19:06 rjoseph joined #gluster
19:08 lalatenduM joined #gluster
19:22 rshott joined #gluster
19:29 tdasilva joined #gluster
19:35 rshott joined #gluster
19:39 lkoranda joined #gluster
19:54 rwheeler joined #gluster
20:05 rafi1 joined #gluster
20:07 fsimonce joined #gluster
20:19 sputnik13 joined #gluster
20:34 lkoranda joined #gluster
20:36 tdasilva joined #gluster
20:51 sputnik13 joined #gluster
20:52 ildefonso joined #gluster
20:58 ildefonso hi all! quick question: are there good reasons to avoid glusterfs 3.2 (version included in Ubuntu's main repository), or, for that matter, are there really good reasons to use 3.5 instead?
20:59 semiosis ildefonso: it's extremely old.  use the ,,(ppa) instead of ubuntu universe
20:59 glusterbot ildefonso: The official glusterfs packages for Ubuntu are available here: STABLE: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh -- QA: 3.4: http://goo.gl/B2x59y 3.5: http://goo.gl/RJgJvV 3.6: http://goo.gl/ncyln5 -- QEMU with GlusterFS support: http://goo.gl/e8IHnQ (3.4) & http://goo.gl/tIJziO (3.5)
20:59 semiosis i'll be publishing new releases to the PPAs this weekend
21:00 semiosis 3.4.6, 3.5.3, and 3.6.1
21:00 ildefonso thanks!
21:00 semiosis yw
21:00 chirino_m joined #gluster
21:01 semiosis chirino_m: ping?  fusesource.org is down
21:05 Maitre Would there be any hangups to upgrading packages, on existing clusters?
21:05 semiosis depends how big the upgrade is
21:05 Maitre 3.2 -> 3.6?  ;)
21:05 semiosis ahahaha
21:06 semiosis yep.  you'll need some downtime.  see ,,(3.3 upgrade notes) that should probably be enough to jump from 3.2 to 3.6 but not sure if anyone's ever tried it
21:06 glusterbot http://vbellur.wordpress.com/2012/05/31/upgrading-to-glusterfs-3-3/
21:06 semiosis might want to stage a test before doing it on your prod cluster
21:06 1JTAAZP0X joined #gluster
21:08 Maitre But like filesystem data shouldn't ever just be lost, right?
21:08 chirino_m semiosis: dont think we are using that domain anymore..
21:08 semiosis chirino_m: where's the hawtjni reference now?
21:09 semiosis chirino_m: http://http//hawtjni.fusesource.org/ is linked from github, but doesnt work
21:09 chirino_m hum.. I'm gonna have to deploy it to github site.
21:09 semiosis oh well that's a bad url in the link, but the hostname is dead too
21:09 semiosis ok
21:09 semiosis thanks
21:10 semiosis icymi, i have two students from FIU working on the glusterfs-java-filesytem/libgfapi-jni for their senior project.  :D
21:11 semiosis chirino_m: ^
21:17 chirino_m noice!
21:19 semiosis Maitre: it shouldn't be lost.  if it's lost, that's definitely a bug :)  but i doubt it would be lost
21:20 ildefonso semiosis, FIU as in Florida International University?
21:21 semiosis yes
21:26 DV joined #gluster
21:32 Maitre Well whatever.  I guess I'll run this upgrade.
21:33 Maitre I figure worst case scenario is, it wants to full resync master -> slave, right?
21:34 semiosis idk about that
21:34 semiosis there's no master/slave in a glusterfs cluster (except for a geo-replication slave)
21:35 jackdpeterson Hey all, I'm back attempting to resolve a gluster nfs stale file handle issue. I now have my dev-environment in a state where I can reproduce the issue (added a brick, no rebalance yet) and NFS client reports nfs stale file handle
21:35 Maitre Okay well like, worst case then is a split-brain, right?
21:35 Maitre XD
21:37 jackdpeterson I'm curious what next steps are to diagnose the issue. I'm seeing some E's in the gluster log -- "transport.address-family not specified. Could not guess default value from (remote-host:(null) or transport.unix.connect-path:(null)) options". Not sure if it's related or if that's another issue.
21:39 _dist jackdpeterson: Just adding a brick while the volume is online does this? Could you verify if the gluster fuse drops as well as NFS?
21:41 jackdpeterson @_dist -- sure, I can attempt to reproduce with another add-brick operation and having fuse mounted. I'm curious though -- are there known issues with the NFSv3 implementation and glusterfs 3.5.1 / ubuntu 12.04x64
21:42 jackdpeterson @_dist mount options so far: # mount -t nfs -o "bg,soft,intr,timeo=5,retrans=5,actimeo=30,retry=5,vers=3" 10.1.8.126:/pod1 /var/www
21:42 _dist jackdpeterson: I went from 3.4.1 directly to 3.5.2 and I've honestly never used NFS directly to a gluster volume. I have used glusterfs-fuse clients to reshare NFS before. You have lots of small writes?
21:43 jackdpeterson @_dist ... a bajillion small reads and some periodic writes (php servers)
21:45 jackdpeterson @_dist unfortunately I tried using the fuse adapter first and performance was too poor for it to be practical for our use case. There was a massive improvement in terms of load speed due to the native caching that nfs did. I'm guessing that has something to do with it... I'm just not sure what the fix is -- whether dirty or proper
21:47 jackdpeterson correction -- its gluster 3.5.2
21:49 _dist NFS is going to be better
21:53 sputnik13 joined #gluster
21:57 Maitre Well, upgraded Ubuntu packages to 3.4 ... thing seems to still work.
21:57 Maitre YOLO!
22:02 _Bryan_ joined #gluster
22:03 semiosis FOMO
22:03 semiosis Maitre: congrats you may be the first YOLO in #gluster :)
22:05 jackdpeterson @_dist --> removing bricks (pending). will let you know after I re-run this test of re-adding bricks with glusterfs-client-tools now installed on the client machine and both methods mounted (NFS + FUSE) in separate dirs.
22:30 jackdpeterson @_dist -- the problem only occurs with NFS mounts and not FUSE.
22:37 snowboarder04 Is it a known issue that a lot of the documentation links are pointing at 404 URLs?
22:38 snowboarder04 i.e. pretty much every link in the top-most overview section here: http://gluster.org/documentation/Getting_started_overview/
22:38 glusterbot Title: Gluster (at gluster.org)
22:39 DanielGluster joined #gluster
22:39 DanielGluster Hi guys, i’m getting “Invalid option allow_other” suddenly on a mount that was working previoulsy - any ideas why that might be?
22:49 coredump joined #gluster
22:49 JoeJulian DanielGluster: There are only three possibilities. In ascending probability: you're remembering wrong, it's still working and you're just now noticing that message, something changed.
22:49 DanielGluster Joe, why would it say invalid option, isn’t “allow_other” an acceptable option??
22:57 DanielGluster I’ve got this in FSTAB
22:57 DanielGluster localhost:tms /working/gluster glusterfs allow_other,defaults 0 0
23:01 DanielGluster ?
23:02 DanielGluster @JoeJulian
23:18 nage joined #gluster
23:26 JoeJulian DanielGluster: I don't know why it would be a valid option unless you're mounting it as non-root.
23:26 JoeJulian (which I think was a valid option for non-root... I think I even blogged about it once upon a time)
23:27 JoeJulian @lucky joejulian.name mount gluster unprivileged
23:27 glusterbot JoeJulian: http://joejulian.name/blog/mounting-a-glusterfs-volume-as-an-unprivileged-user/
23:28 JoeJulian Ah, nope. That wasn't done as a mount option.
23:29 DanielGluster so I can’t use allow_other ?
23:34 sputnik13 joined #gluster
23:34 David_H_Smith joined #gluster
23:36 David_H_Smith joined #gluster
23:37 JoeJulian Do you need to?
23:37 JoeJulian There's no restriction on who can use that fuse mount.
23:41 JoeJulian Man, I think I've been looking at arch packaging for too long today. I'm thinking the wrong thing.
23:42 JoeJulian Meh, no I'm not.
23:44 JoeJulian allow_other allows users other than the one doing the actual mounting to access the filesystem. If you're mounting from fstab others already have access to it.
23:49 badone joined #gluster
23:49 snowboarder04 I'm using gluster for the first time and just tried setting up a volume which failed saying that the other node wasn't connected (it actually was). Listing volumes shows nothing however when I try to run the 'volume create' command again, I'm told that the brick is already part of a volume...
23:50 plarsen joined #gluster
23:50 JoeJulian @path or prefix
23:50 snowboarder04 doing a 'volume remove-brick' fails saying that the volume name I gave in the original command, cannot be found :/
23:50 glusterbot JoeJulian: http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/
23:50 JoeJulian I wish it wouldn't do that until it's successfully built and started a volume.
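The linked post's usual fix is to clear the leftover gluster metadata from the brick directory before retrying the create; a hedged sketch, with /export/brick1 as a placeholder brick path:

    setfattr -x trusted.glusterfs.volume-id /export/brick1
    setfattr -x trusted.gfid /export/brick1
    rm -rf /export/brick1/.glusterfs
    # then restart glusterd and retry the volume create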
23:50 snowboarder04 thanks JoeJulian
23:51 JoeJulian You're welcome.
23:51 DanielGluster Ok, i’ve removed that, but now i’m left with...
23:51 DanielGluster [2014-11-14 23:28:47.117987] W [socket.c:611:__socket_rwv] 0-management: readv on /var/run/a4bfcb16c93b12fdb995d48992c7a1cf.socket failed (Invalid argument)
23:51 DanielGluster which I guess was the real issue
23:51 DanielGluster lots and lots of times in the log file
23:52 JoeJulian DanielGluster: Try restarting glusterd on that server.
23:52 DanielGluster ok
23:52 DanielGluster ahhh
23:52 DanielGluster it hadn't started, it appears - which is a boot issue i've had in the past and thought i'd fixed
23:52 DanielGluster Ok - now we’re in action
23:52 DanielGluster apart from it not starting on boot again, bah
23:57 sputnik13 joined #gluster
