IRC log for #gluster, 2014-10-03

All times shown according to UTC.

Time Nick Message
00:05 cjanbanan joined #gluster
00:05 justinmburrous joined #gluster
00:06 coredump joined #gluster
00:07 dtrainor joined #gluster
00:39 bala joined #gluster
00:44 msmith joined #gluster
00:48 h4rry_ joined #gluster
00:51 cjanbanan joined #gluster
00:55 jobewan joined #gluster
01:30 bala joined #gluster
01:33 rwheeler joined #gluster
01:52 justinmburrous joined #gluster
02:08 harish joined #gluster
02:21 cjanbanan joined #gluster
02:24 justinmburrous joined #gluster
02:44 _pol joined #gluster
02:45 _pol joined #gluster
02:57 justinmburrous joined #gluster
03:10 DV joined #gluster
03:18 soumya_ joined #gluster
03:21 cjanbanan joined #gluster
03:59 hagarth joined #gluster
04:09 justinmburrous joined #gluster
04:10 suliba joined #gluster
04:18 TvL2386 joined #gluster
04:44 _pol joined #gluster
04:51 cjanbanan joined #gluster
05:10 _pol joined #gluster
05:20 haomaiwa_ joined #gluster
05:22 ira joined #gluster
05:25 sputnik13 joined #gluster
05:30 soumya_ joined #gluster
05:35 haomai___ joined #gluster
05:40 haomaiwa_ joined #gluster
05:44 justinmburrous joined #gluster
05:53 zerick joined #gluster
06:11 Philambdo joined #gluster
06:13 tryggvil joined #gluster
06:17 tiglog joined #gluster
06:35 kiwnix joined #gluster
06:36 rgustafs joined #gluster
06:42 ricky-ticky1 joined #gluster
06:46 gildub joined #gluster
06:51 h4rry joined #gluster
06:52 rejy joined #gluster
06:59 Fen2 joined #gluster
07:00 aulait joined #gluster
07:02 cjanbanan joined #gluster
07:06 ctria joined #gluster
07:10 ntt joined #gluster
07:10 _pol joined #gluster
07:11 ntt Hi. Can someone help me with glusterfs and openstack swift integration?
07:13 saurabh joined #gluster
07:14 cjanbanan joined #gluster
07:25 malevolent joined #gluster
07:28 ekuric joined #gluster
07:40 fsimonce joined #gluster
07:44 sputnik13 joined #gluster
07:46 aulait joined #gluster
07:52 xavih tg2: it seems to work quite well. Redundancy can be specified at creation time and can be any number (1 for R5-like, 2 for R6-like, or even bigger)
07:53 xavih tg2: the implementation is more similar to raidz (used in zfs)
07:54 xavih tg2: We are still testing it. There are a lot of things to test and many environments and workloads, but our test resources are limited
07:54 xavih tg2: any help or feedback on that will be greatly appreciated :)
08:12 siel joined #gluster
08:14 mdavidson joined #gluster
08:27 mbukatov joined #gluster
08:31 ws2k3 i'm setting up a new glusterfs cluster should i use ext4 for the bricks or should i use zfs ?
08:33 ndevos xfs is the most used and tested fs for gluster
08:34 ws2k3 so your advice would be go with xfs ?
08:37 Fen2 agree, use xfs
08:40 ws2k3 okay will do
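For reference, a brick filesystem is usually prepared along these lines; the 512-byte inode size is the commonly documented recommendation so gluster's extended attributes fit inside the inode (the device name and mountpoint are only examples, not taken from the log):

    mkfs.xfs -i size=512 /dev/sdb1     # format the brick disk as xfs
    mkdir -p /bricks/brick1
    mount /dev/sdb1 /bricks/brick1     # add a matching /etc/fstab entry so it survives reboots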
08:40 ws2k3 i made a 2 node cluster, each node with 2 disks. i think i go with striped replicated across them
08:41 ws2k3 so stripe over 2 machines, and if one machine goes down the cluster will stay up
08:42 ws2k3 question: if one machine goes down and i write data to the other node, when it comes back up it will resync and rejoin, right ?
08:54 getup- joined #gluster
09:07 vikumar joined #gluster
09:11 _pol joined #gluster
09:15 glusterbot New news from newglusterbugs: [Bug 1149118] Spurious failure on disperse tests (bad file size on brick) <https://bugzilla.redhat.com/show_bug.cgi?id=1149118>
09:20 ndevos ~stripe | ws2k3
09:20 glusterbot ws2k3: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
09:20 ndevos ws2k3: stripe is likely not what you want, distribute-replicate is probably more suitable
09:25 tryggvil joined #gluster
09:45 LebedevRI joined #gluster
09:47 ws2k3 i have 2 servers, each with 2 ssd's, my idea was to stripe 2 replicate 2
09:47 ws2k3 but you would say go with distribute replicated
09:48 ws2k3 they are pretty heavy machines, the ssd's have over 80,000 IOPS per disk and the machines have a 2gbps network connection
09:49 ws2k3 i also have to say redundancy is not very important, it's just that i have a location where i can write files and another server immediately reads them and removes them
09:50 gildub joined #gluster
10:02 jmarley joined #gluster
10:02 ndevos ws2k3: hmm, in that case you could use striping, but I doubt you get a performance improvement compared to distribute
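A minimal sketch of the distributed-replicated layout ndevos recommends over striping, assuming the two servers are reachable as server1/server2 and each SSD is mounted under /bricks (names and paths are illustrative, not taken from the log):

    # replica 2 with four bricks gives a 2x2 distributed-replicated volume;
    # order the bricks so every replica pair spans both servers
    gluster volume create myvol replica 2 \
        server1:/bricks/ssd1/data server2:/bricks/ssd1/data \
        server1:/bricks/ssd2/data server2:/bricks/ssd2/data
    gluster volume start myvol

    # on a client
    mount -t glusterfs server1:/myvol /mnt/myvol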
10:07 redgoo joined #gluster
10:07 redgoo Hello
10:07 glusterbot redgoo: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
10:09 redgoo I have GlusterFS running on 2 bricks, and one of the brick servers has the mount /share which the gluster volume points to. I have an NFS server (version 3) installed to serve /share, and I was wondering if there are any issues with running an NFS server on top of GlusterFS
10:09 redgoo Has anyone faced an issue with running NFS server on top of GlusterFS
10:09 redgoo ?
10:10 ws2k3 redgoo why do you use an nfs server on top of gluster?
10:10 ws2k3 redgoo glusterfs has a built-in NFS server, so you can access your glusterfs dataset over NFS
10:11 redgoo my client machine is running a custom O/S we designed in 2004 so the kernel is quite old and we can't get the latest version of NFS client running on it...
10:12 ws2k3 the latest nfs version is nfs v4
10:12 redgoo so in order for me to mount the shares on a client machine i had to install an NFS server on GlusterFS Brick1
10:12 ws2k3 it does not support nfs v3 ?
10:12 redgoo nop
10:12 ws2k3 i think running a nfs server on top of glusterfs should not be an issue
10:13 ws2k3 ndevos so you would advise me to still go with distribute replicated ?
10:14 ndevos ws2k3: yes, that is what I personally would use - but you can test if stripe really works better for you (I do not expect it to)
10:14 ws2k3 well i'm pretty new with gluster so i think i just go with your advice then :P
10:14 ndevos :)
10:15 ws2k3 ndevos would it matter if i used the mountpoint of the disk as the gluster brick ?
10:16 ws2k3 glusterfs came up with a warning that this was not recommended, but i do not see why that would be an issue
10:17 redgoo sorry, was the answer to my question that there are no known issues with having NFS on top of GlusterFS?
10:18 ws2k3 redgoo gluster is a FUSE filesystem so i dont think that is going to be an issue
10:18 redgoo Sweet thanks
10:18 ws2k3 redgoo but keep a few things in mind
10:19 ws2k3 on the server you use as a brick, mount the glusterfs volume locally on a mountpoint somewhere and then use that mountpoint for your nfs export
10:19 redgoo you see, i've had stale file errors every morning with this, as it looks like GlusterFS does a logrotate every day
10:19 ws2k3 so you do NFS -> glusterclient -> brick and not NFS -> brick
10:19 redgoo ws2k3 yep that's exactly how i have it setup
10:20 ws2k3 hmm okay then i cannot help you further never done it myself
10:21 redgoo i have it setup like this brick1+brick2 -> glusterFS volume -> glusterFS client mount to /share in brick1 --> NFS server exports /share/file1 /share/file2 --> on the client machine NFS client  --> nfsserver /file1 /file2
10:21 ws2k3 yeah that should be fine
10:21 ws2k3 what issues are you running into exactly ?
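A rough sketch of the NFS -> glusterclient -> brick layering discussed above, assuming a volume called myvol, a server reachable as server1, and a kernel NFS server on the exporting host (all names are illustrative):

    # on the exporting host: mount the volume through the gluster client,
    # do not export the brick directory directly
    mount -t glusterfs server1:/myvol /share

    # gluster's built-in NFS service can clash with kernel nfsd, so turn it off
    gluster volume set myvol nfs.disable on

    # /etc/exports -- FUSE mounts need an explicit fsid to be exportable
    /share  192.168.0.0/24(rw,sync,no_subtree_check,fsid=14)

    exportfs -ra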
10:25 Slashman joined #gluster
10:37 getup- joined #gluster
10:41 ntt Hi. in replica 2, one server rebooted. Now gluster tells me that /export/brick1 (on the server that crashed) is not connected. Can someone help me?
10:52 Pupeno joined #gluster
10:55 wica joined #gluster
10:56 wica Hi, If I lose a brick, what will glusterfs 3.3 do?
10:56 wica Will it block the volume ?
11:00 harish joined #gluster
11:18 diegows joined #gluster
11:19 chirino joined #gluster
11:23 ndevos ws2k3: the best is to use a mountpoint like /bricks/brick-1-for-volume and /bricks/brick-1-for-volume/data for the brick itself
11:24 ndevos if /bricks/brick-1-for-volume/data does not exist when glusterd starts the brick process (failed to mount on reboot?), the / of your server does not fill up
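A short sketch of the layout ndevos describes, with the brick one level below the mountpoint (device and volume names are assumptions):

    mkdir -p /bricks/brick-1-for-volume
    echo '/dev/sdb1 /bricks/brick-1-for-volume xfs defaults 0 0' >> /etc/fstab
    mount /bricks/brick-1-for-volume

    # the brick handed to 'gluster volume create' is a subdirectory of the
    # mountpoint, e.g. server1:/bricks/brick-1-for-volume/data; if the mount
    # is missing at boot the directory is absent and the brick process
    # refuses to start instead of quietly filling up /
    mkdir /bricks/brick-1-for-volume/data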
11:27 cjanbanan joined #gluster
11:50 bene2 joined #gluster
11:57 ws2k3 ndevos good idea, thank you
11:58 Fen1 joined #gluster
12:01 Slashman_ joined #gluster
12:05 virusuy joined #gluster
12:05 virusuy joined #gluster
12:08 ThatGraemeGuy joined #gluster
12:08 ThatGraemeGuy joined #gluster
12:09 bene2 joined #gluster
12:09 ricky-ticky joined #gluster
12:13 bala1 joined #gluster
12:50 jiri__ joined #gluster
12:51 jiri__ Hello
12:51 glusterbot jiri__: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:53 jiri__ can someone tell me what will happen if a disk goes readonly?
12:53 h4rry_ joined #gluster
13:02 getup- joined #gluster
13:10 ricky-ticky joined #gluster
13:18 ws2k3 ndevos when i use this command : gluster volume create statlogs replica 2 transport tcp 10.1.80.125:/mnt/sda1/data/ 10.1.80.126:/mnt/sda1/data/ 10.1.80.125:/mnt/sdb1/data/ 10.1.80.126:/mnt/sdb1/data/ it says volume create: statlogs: failed: The brick 10.1.80.126:/mnt/sda1/data is being created in the root partition. It is recommended that you don't use the system's root partition for
13:18 ws2k3 storage backend. Or use 'force' at the end of the command if you want to override this behavior. but.. the disks are mounted on /mnt/sda1 and /mnt/sdb1 and i have put data behind it, so why does it still say it is being created in the root partition?
13:21 theron joined #gluster
13:22 ira joined #gluster
13:22 theron joined #gluster
13:24 ppai joined #gluster
13:26 rwheeler joined #gluster
13:41 Norky ws2k3, what does df /mnt/sda1/data     tell you?
13:45 msmith joined #gluster
13:56 msmith joined #gluster
14:01 elico joined #gluster
14:15 soumya_ joined #gluster
14:15 _pol joined #gluster
14:22 ppai joined #gluster
14:22 h4rry joined #gluster
14:29 jobewan joined #gluster
14:33 ira joined #gluster
14:33 gomikemike what does this error exactly mean? "0-glusterfs: SSL support is NOT enabled"
14:34 gomikemike i created a rrDNS entry and pointed it at my 2 gluster servers
14:34 gomikemike but it's not mounting the volume
14:36 plarsen joined #gluster
14:45 tryggvil joined #gluster
14:48 sprachgenerator joined #gluster
14:52 ws2k3 @Norky i forgot some mounts
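For anyone hitting the same "root partition" warning: a quick df on the brick path shows whether it really sits on the intended disk (paths follow ws2k3's example):

    df -h /mnt/sda1/data
    # if 'Mounted on' reports / rather than /mnt/sda1, the disk is not mounted;
    # mount it first and the volume create should succeed without 'force'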
14:53 ws2k3 when i have a 2 node glusterfs cluster with just replicated, can i force my glusterclient to read from only one server? or does it always use both?
14:57 jiri__ Found the answer: when a replica set gets lost and you replace it with a clean replica set, gluster will only report back that the file can not be found.
14:57 jiri__ Nice to know.
15:00 ekuric left #gluster
15:05 harish joined #gluster
15:14 mojibake joined #gluster
15:16 lmickh joined #gluster
15:26 cjanbanan joined #gluster
15:32 scuttlemonkey_ joined #gluster
15:36 justinmburrous joined #gluster
15:39 davemc joined #gluster
15:40 jbrooks joined #gluster
15:42 plarsen joined #gluster
15:45 nated left #gluster
15:47 n-st joined #gluster
15:56 cjanbanan joined #gluster
16:01 gomikemike is there a way to change the name of a volume?
16:05 semiosis delete & recreate?
16:11 gomikemike semiosis: errr i knew that way...trying to salvage the data and port
16:12 semiosis deleting the volume doesnt touch the data
16:12 justinmburrous joined #gluster
16:20 nullck joined #gluster
16:33 dtrainor joined #gluster
16:40 cmtime but semiosis, if you delete, are you saying to delete .glusterfs as well?
16:40 semiosis did i say that?
16:40 semiosis i didnt hear me say that
16:40 cmtime no but is it not implied ?
16:41 cmtime I am asking =P
16:41 semiosis what do you think?
16:42 cmtime Well, in the past when you deleted a volume you could not recreate it with the .glusterfs still on the brick, but maybe I am wrong.
16:43 semiosis totally wrong
16:43 semiosis you may need to remove a xattr but i think that's about it
16:43 cmtime a lot of posts say both
16:43 semiosis oh
16:43 semiosis links?
16:44 cmtime sec
16:45 cmtime http://gluster.org/pipermail/gluster-users/2012-October/011497.html
16:45 cmtime lol thats a old one
16:45 cmtime but valid
16:46 semiosis hardly
16:46 cmtime Not easy to find current info on that topic in google
16:46 semiosis that is one ML post saying what some random person did trying to solve his problem
16:46 semiosis not instructions
16:47 cmtime no instructions that I can find
16:47 justinmburrous joined #gluster
16:47 semiosis i dont even know why we're discussing this.  gomikemike asked the orig question & didnt seem to have any problem understanding
16:48 cmtime Was my own question because I had a problem recently along those lines
16:48 cmtime sorry
16:48 semiosis if you have some issue please clearly state the problem & hopefully someone can help
16:48 semiosis otherwise I have to get back to work
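A hedged sketch of the delete-and-recreate rename discussed above, including the per-brick cleanup that older posts mention (semiosis suggests the xattr alone may be enough); volume and brick names are placeholders, so test on scratch data first:

    gluster volume stop oldname
    gluster volume delete oldname        # removes only the volume definition, not the files

    # on every brick server, clear the markers left by the old volume
    setfattr -x trusted.glusterfs.volume-id /bricks/brick1/data
    setfattr -x trusted.gfid /bricks/brick1/data
    rm -rf /bricks/brick1/data/.glusterfs   # some guides also remove this

    gluster volume create newname replica 2 \
        server1:/bricks/brick1/data server2:/bricks/brick1/data
    gluster volume start newname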
16:50 ws2k3 when i have a 2 node glusterfs cluster with just replicated, can i force my glusterclient to read from only one server? or does it always use both?
16:50 semiosis ws2k3: i think there's a setting for that in ,,(options) with recent glusterfs versions
16:50 glusterbot ws2k3: See config options and their defaults with 'gluster volume set help'; you can see the current value of an option, if it has been modified, with 'gluster volume info'; see also this page about undocumented options: http://goo.gl/mIAe4E
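The setting glusterbot hints at can be explored like this; cluster.read-subvolume is the usual knob for pinning replica reads to one brick, but option names and behaviour vary between releases, so treat the example as an assumption and check 'gluster volume set help' on your version (the volume name is a placeholder):

    gluster volume set help | grep -i read
    # pin reads to the first replica subvolume (subvolume names follow the
    # <volname>-client-<index> pattern from the generated vol file)
    gluster volume set myvol cluster.read-subvolume myvol-client-0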
16:56 chirino joined #gluster
17:02 h4rry joined #gluster
17:03 virusuy joined #gluster
17:03 virusuy joined #gluster
17:30 nueces joined #gluster
17:40 justinmburrous joined #gluster
17:45 h4rry_ joined #gluster
17:49 ppai joined #gluster
17:52 cjanbanan joined #gluster
17:54 coredump joined #gluster
17:55 nueces joined #gluster
18:07 sprachgenerator joined #gluster
18:11 vipulnayyar joined #gluster
18:13 justinmburrous joined #gluster
18:30 ThatGraemeGuy joined #gluster
18:36 h4rry joined #gluster
18:38 julim joined #gluster
18:39 htrmeira joined #gluster
18:50 davemc joined #gluster
18:57 justinmburrous joined #gluster
19:00 tom[] joined #gluster
19:07 quique joined #gluster
19:12 tom[] joined #gluster
19:13 quique i have server1, server2, server3, server4, i probe 2-4 from 1, a gluster peer status on server2-4 shows the ip of server and not he dns name, is there a way to get them to store the dns instead of the ip address for server1?
19:14 quique i have server1, server2, server3, server4, i probe 2-4 from 1, a gluster peer status on server2-4 shows the ip of server1 and not the dns name, is there a way to get them to store the dns instead of the ip address for server1?*
19:17 jmarley joined #gluster
19:30 skippy quique: run a probe on each member of the pool
19:31 MacWinner joined #gluster
19:31 skippy quique: http://supercolony.gluster.org/pipermail/gluster-users/2013-December/015365.html
19:31 glusterbot Title: [Gluster-users] Gluster peer probe: why sometimes show host name, sometimes IP address? (at supercolony.gluster.org)
19:32 justinmburrous joined #gluster
19:38 ivok joined #gluster
19:41 jbrooks joined #gluster
19:50 h4rry joined #gluster
19:56 dtrainor joined #gluster
20:00 dtrainor joined #gluster
20:01 theron joined #gluster
20:01 theron joined #gluster
20:13 semiosis ,,(hostnames)
20:13 glusterbot Hostnames can be used instead of IPs for server (peer) addresses. To update an existing peer's address from IP to hostname, just probe it by name from any other peer. When creating a new pool, probe all other servers by name from the first, then probe the first by name from just one of the others.
20:13 semiosis skippy: quique: ^
20:14 justinmburrous joined #gluster
20:15 quique skippy, semiosis thank you
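Glusterbot's ,,(hostnames) note, spelled out as commands for a four-server pool like quique's (hostnames are illustrative):

    # from server1: probe the other peers by name
    gluster peer probe server2.example.com
    gluster peer probe server3.example.com
    gluster peer probe server4.example.com

    # from any one of the others: probe server1 by name so its IP entry
    # gets replaced with the hostname
    gluster peer probe server1.example.com
    gluster peer status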
20:22 cjanbanan joined #gluster
20:55 justinmburrous joined #gluster
21:27 coredump joined #gluster
21:34 dtrainor joined #gluster
21:38 davemc joined #gluster
21:44 justinmburrous joined #gluster
22:07 dtrainor joined #gluster
22:14 theron joined #gluster
22:17 theron_ joined #gluster
22:24 haomaiw__ joined #gluster
22:31 Jamoflaw joined #gluster
22:32 Jamoflaw does anyone know a rough release schedule for 3.5.3 or 3.6 yet?
22:44 justinmburrous joined #gluster
23:14 tiglog joined #gluster
23:16 cjanbanan joined #gluster
23:17 justinmb_ joined #gluster
23:29 jbrooks joined #gluster
23:46 cjanbanan joined #gluster
23:53 justinmburrous joined #gluster
23:55 tryggvil joined #gluster
