IRC log for #gluster, 2015-02-19

All times shown according to UTC.

Time Nick Message
00:03 RicardoSSP joined #gluster
00:03 RicardoSSP joined #gluster
01:02 badone_ joined #gluster
01:22 bala joined #gluster
01:49 jbrooks joined #gluster
02:15 jbrooks joined #gluster
02:33 duyt1001 joined #gluster
02:49 ilbot3 joined #gluster
02:49 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:53 jbrooks joined #gluster
02:55 bharata-rao joined #gluster
02:58 Gill joined #gluster
03:02 jmarley joined #gluster
03:04 anrao joined #gluster
03:07 hagarth joined #gluster
03:13 jbrooks joined #gluster
03:38 meghanam joined #gluster
03:51 Pupeno_ joined #gluster
03:59 atinmu joined #gluster
04:01 itisravi joined #gluster
04:02 anrao joined #gluster
04:04 shubhendu joined #gluster
04:07 MacWinner joined #gluster
04:10 atalur joined #gluster
04:12 meghanam joined #gluster
04:13 nbalacha joined #gluster
04:17 ndarshan joined #gluster
04:18 spandit joined #gluster
04:30 hagarth joined #gluster
04:32 jiffin joined #gluster
04:33 anoopcs joined #gluster
04:34 gem joined #gluster
04:35 kshlm joined #gluster
04:36 kanagaraj joined #gluster
04:44 rafi joined #gluster
04:46 kumar joined #gluster
04:50 maveric_amitc_ joined #gluster
05:00 kdhananjay joined #gluster
05:00 RameshN joined #gluster
05:01 anil joined #gluster
05:01 schandra joined #gluster
05:03 ppai joined #gluster
05:06 meghanam joined #gluster
05:25 Manikandan joined #gluster
05:29 kdhananjay joined #gluster
05:29 itpings hi guys
05:31 overclk joined #gluster
05:32 prasanth_ joined #gluster
05:35 rjoseph joined #gluster
05:36 Pupeno joined #gluster
05:40 badone_ joined #gluster
05:47 deepakcs joined #gluster
05:47 ppai joined #gluster
05:51 lalatenduM joined #gluster
05:51 ramteid joined #gluster
05:52 badone_ joined #gluster
05:54 soumya joined #gluster
05:54 aravindavk joined #gluster
06:10 raghu joined #gluster
06:14 gem joined #gluster
06:15 soumya joined #gluster
06:16 atalur joined #gluster
06:24 hagarth joined #gluster
06:28 ppai joined #gluster
06:31 aravindavk joined #gluster
06:45 bala joined #gluster
06:50 atinmu joined #gluster
06:52 ppai joined #gluster
06:53 SOLDIERz joined #gluster
06:58 dusmant joined #gluster
07:01 bala joined #gluster
07:12 karnan joined #gluster
07:25 jtux joined #gluster
07:27 nshaikh joined #gluster
07:32 atinmu joined #gluster
07:39 [Enrico] joined #gluster
07:41 ntt joined #gluster
07:42 LebedevRI joined #gluster
07:52 kovshenin joined #gluster
07:56 fsimonce joined #gluster
07:56 fsimonce joined #gluster
08:08 coredump joined #gluster
08:19 kdhananjay joined #gluster
08:20 ntt Hi. After an upgrade from 3.4 to 3.6 i have a problem with the "mount point" of the volume. in /etc/fstab i have /dev/sdb1 -> /export/brick1 and i built the volume with root folder = /export/brick1. In 3.6 i should change the root folder of the volume from /export/brick1 to /export/brick1/gv0. What is the right procedure for this? Should i delete the volume, save the content in another dir, recreate a new volume (with the correct mo
08:24 SOLDIERz_ joined #gluster
08:43 dusmant joined #gluster
08:43 deniszh joined #gluster
08:45 Pupeno joined #gluster
08:50 ndarshan joined #gluster
09:05 liquidat joined #gluster
09:13 arash joined #gluster
09:13 arash Hello
09:13 glusterbot arash: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:14 arash i have this error Transport endpoint is not connected
09:14 arash Version 3.5.3
09:14 hagarth joined #gluster
09:14 arash i searched before both google and mailing and find nothing
09:14 itpings hi there
09:15 Slashman joined #gluster
09:15 itpings joejulian u there ?
09:15 arash this is my command to mount glusterfs
09:16 itpings disable firewall arash
09:16 itpings and try
09:16 soumya_ joined #gluster
09:16 arash mount -t glusterfs node1:/gluster /gluster -o defaults,netdev,backupvolfile-server=node2
09:17 arash in node1 i cant ls the /gluster give me the error  Transport endpoint is not connected
09:17 arash is this about firewall  ? \
09:18 ndevos arash: what is the path you used for your bricks?
09:18 itpings can you probe the peer
09:19 arash in gluster volume status i got this
09:20 arash Brick node1ip:/mnt/bd/gluster
09:20 SOLDIERz_ joined #gluster
09:20 arash node2ip :/mnt/bd/gluster
09:20 Norky joined #gluster
09:20 itpings pls go step by step
09:20 itpings can you probe the peer ?
09:20 arash yes i can probe the peer
09:20 itpings ok
09:21 arash wait if i cant probe the peer
09:21 itpings what does pool list shows
09:21 arash at least in node1 i can check the files
09:21 itpings does it says connected ?
09:23 yoavz joined #gluster
09:23 arash State: Peer in Cluster (Connected)
09:23 arash itpings please tell me if i disconnect the network
09:23 arash between two servers
09:23 arash and peer probe failed
09:24 arash whats happened there
09:24 arash ?
09:25 itpings so if both your peers are connected it means the connection is good
09:26 arash ok now everything is fine again
09:26 arash but when i disconnect node2
09:26 itpings ok now what is your vol info says
09:26 arash Transport endpoint is not connected shows in node1
09:26 arash why ?
09:27 itpings why would you disconnect node 2 ?
09:27 itpings for replication you need at least two noes
09:27 itpings for replication you need at least two nodes
09:27 arash think if the node 2 server is lost for whatever reason
09:28 arash what happens to the machines running on node 1 ?
09:28 itpings this is replication you're talking about, it's not high availability
09:28 arash my point is high availablity
09:28 arash because of that im using glusterfs
09:28 itpings i mean data will be available
09:29 itpings but when you bring back the second one they will start working together again
09:29 itpings and data will start replicating
09:29 itpings so that fulfills both requirements
09:30 arash ok im now disconnecting node2, so machines freeze in running state
09:30 arash ping is ok, machine is up but it froze
09:30 itpings wait for some time
09:30 arash ok
09:31 arash in this time would you please tell me how can i have both replication and ha ?
09:34 itpings what is in your fstab
09:34 arash 172.20.8.101:/image on /gluster type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072) this is node1
09:35 arash 172.20.8.100:/image on /gluster type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072) node2
09:35 arash in node1 i set backupvol 100 and in node2 i set backupvol 101
09:36 zerick joined #gluster
09:37 arash i have 2 test machines virtualized by KVM on node1
09:38 arash so when i disconnected node2, i dont know why, the machines froze and stopped working, it just said status is running
09:39 itpings and when you connect back
09:39 itpings does it unfreeze
09:39 arash yes everything works fine
09:40 arash in virsh edit i got this on machines running on node1
09:40 arash <disk type='network' device='disk'>
09:40 arash <driver name='qemu' type='raw' cache='none'/>
09:40 arash <source protocol='gluster' name='image/398/disk.0'>
09:40 arash <host name='172.20.8.100' port='24007'/>
09:40 arash you see its 172.20.8.100 the ip of node2
09:41 arash correct me if im wrong
09:41 arash im using opennebula
09:41 arash and its just replicate
09:41 arash i have no HA in this case
09:42 arash is this true ?
09:42 ndevos arash: by default, the ,,(ping-timeout) is set to 42 seconds, but (scsi) disk I/O in the VM will timeout in 30
09:42 glusterbot arash: The reason for the long (42 second) ping-timeout is because re-establishing fd's and locks can be a very expensive operation. Allowing a longer time to reestablish connections is logical, unless you have servers that frequently die.
09:43 ndevos arash: /sys/block/sda/device/timeout contains that timeout for the disk, not sure where it is on a virtio-block device
09:44 arash yes ndevos it works fine if you don't use libgfapi but i think using libgfapi is just replicating and we dont have HA
09:44 ndevos you would use a udev rule to adjust the timeout when the disk gets detected on boot
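A minimal sketch of the two ways to raise that disk timeout inside the guest, per what ndevos describes; the 90-second value and the device match are illustrative assumptions, not values given in this channel:

    # one-off, inside the VM: raise the SCSI command timeout above gluster's 42s ping-timeout
    echo 90 > /sys/block/sda/device/timeout
    # persistent variant, e.g. in /etc/udev/rules.d/99-disk-timeout.rules
    ACTION=="add", SUBSYSTEM=="scsi", ATTR{type}=="0", ATTR{timeout}="90"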
09:45 arash i have a bad english and i think i explain a little complex
09:45 arash in hypervisor node1 i cant ls /gluster
09:45 ndevos I'm not sure, but maybe qemu/libvirt can set a timeout for disk I/O, I very much doubt the issue is with libgfapi :)
09:46 arash because of disconnecting the node2
09:46 arash i connect the node2 again and everything works fine
09:47 arash so please tell me why ? its not normal when you disconnect backupvol then you cant reach the files
09:47 arash something goes wrong here
09:48 arash mount -t glusterfs node1:/gluster /gluster -o defaults,netdev,backupvolfile-server=node2 this command tell backup volume is node 2
09:48 arash so node2 die everything die
09:49 arash its not ok its wrong
09:49 ndevos backupvolfile-server=node2 is only used during the mount process, after the mount succeeded, all glusterfs clients know the volume layout and will talk to the bricks directly
09:50 arash ok so 1 thing is clear that i dont have HA here
09:51 arash why then ? whats wrong ?
09:51 ndevos how long did you wait after disconnecting node2? how long do you think the fail-over should be?
09:51 arash when i disconnect the node 2 at server room
09:52 arash i think there not should be any problem
09:52 arash so after a minute or two
09:52 arash i realize machines freezes
09:52 arash so in node1 ssh i try to check the gluster and
09:53 arash i run ls command in /gluster so the error shows
09:53 dusmant joined #gluster
09:53 arash Transport endpoint is not connected
09:53 arash then i connect the node2 again
09:54 arash and after some seconds everything works fine
09:54 ndevos how strange, can you ,,(paste) your 'gluster volume info' output?
09:54 glusterbot For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
09:55 arash so it clearly shows that if node2 goes off, the machines on server node1 have a problem
09:55 arash ok please
09:55 arash wait a minute while i disconnect node 2
09:56 ndevos you can also paste the /var/log/glusterfs/gluster.log somewhere, that is the log for the fuse-mount
09:57 arash ok i disconnect node 2
09:57 arash State: Peer in Cluster (Disconnected)
09:57 ricky-ticky joined #gluster
09:57 arash root@kvm-hv8:~# cd /gluster
09:57 arash bash: cd: /gluster: Transport endpoint is not connected
09:58 arash problem shows
09:58 arash again
09:58 arash # gluster volume info
09:58 arash
09:59 arash Volume Name: image
09:59 arash Type: Replicate
09:59 arash Volume ID: 9174a4d6-22e4-495b-bd0e-e6ec60fde98f
09:59 arash Status: Started
09:59 arash Number of Bricks: 1 x 2 = 2
09:59 arash Transport-type: tcp
09:59 arash Bricks:
09:59 arash Brick1: 172.20.8.100:/mnt/bd/image
09:59 arash Brick2: 172.20.8.101:/mnt/bd/image
09:59 arash Options Reconfigured:
09:59 arash server.allow-insecure: on
09:59 arash storage.owner-uid: 9869
09:59 arash storage.owner-gid: 9869
09:59 arash performance.quick-read: off
09:59 arash performance.read-ahead: off
09:59 arash performance.io-cache: off
09:59 arash performance.stat-prefetch: on
09:59 ndevos arash: please use one of the ,,(paste) tools
09:59 glusterbot arash: For RPM based distros you can yum install fpaste, for debian, ubuntu, and arch it's pastebinit. Then you can easily pipe command output to [f] paste [binit] and it'll give you a URL.
09:59 arash cluster.eager-lock: enable
09:59 arash network.remote-dio: enable
09:59 arash cluster.quorum-type: auto
09:59 arash cluster.server-quorum-type: server
09:59 arash storage.health-check-interval: 10
10:00 ndevos arash: ah, well, I guess with the quorum you have enabled, you need at least 51% of your storage servers to be available
10:01 itpings ok i have questions guys
10:01 itpings does adding an extra hdd to an existing server need the same command as adding an extra server for replication ?
10:01 itpings and how to check if the vol size has increased
10:01 schandra joined #gluster
10:01 itpings i just added extra hdd to gv0 vol
10:02 itpings by add-brick gv0 command
10:02 itpings i mounted on diff mount point ofcourse
10:02 itpings then used the rebalance command
10:02 itpings all went well
10:02 itpings now how to see if vol size has increased
10:03 ndevos itpings: call 'df /path/mountpoint'
10:03 arash http://pastebin.com/7YbQpBGL
10:03 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
10:04 arash http://fpaste.org/187423/14243402/
10:04 itpings yeah got it
10:04 itpings lo
10:04 itpings lol
10:05 itpings i was df ing the wrong mount points :D
10:06 arash ndevos : you think its about qourom ?
10:06 ndarshan joined #gluster
10:09 itpings why is data sometimes replicated from server1 to server 2 but not vice versa ?
10:09 ndevos arash: yes, that is very likely
10:10 arash ok
10:10 arash i disable it now
10:10 itpings guys why the replication is only working one side
10:11 itpings node1 --> node 2 ...ok
10:11 itpings node 2 --> node 1 ... not ok
10:12 ndevos itpings: replication only works if you write contents through the glusterfs-fuse mountpoint (or another glusterfs client)
10:12 arash how can i set cluster.quorum-type: auto to off ?
10:13 arash type gluster volume set cluster.quorum-type: off ?
10:13 atalur joined #gluster
10:13 * ndevos isnt a quorum expert and doesnt really know
10:15 arash ok i set quorum off
10:16 arash nothing changed
10:16 itpings backup2:/gv0 on /mnt/gluster type fuse.glusterfs
10:16 ndevos arash: quorum is a good thing, it will help with preventing split-brains, disabling it is not recommended
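For reference, the syntax arash was asking about takes the volume name and no colon; a sketch assuming the volume is the one shown above, image:

    gluster volume set image cluster.quorum-type none         # client-side quorum off (ndevos advises against this)
    gluster volume set image cluster.server-quorum-type none  # server-side quorum off
    gluster volume info image                                 # confirm under "Options Reconfigured"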
10:16 itpings so the mount point is /mnt/gluster
10:17 itpings but its not replicating
10:17 itpings only from node 1 fine but not working the opposite
10:18 itpings now all working
10:18 itpings strange
10:20 itpings here is one more
10:20 bala joined #gluster
10:20 itpings i can see a file i created on node2 which was not replicated on node1
10:20 itpings now i can see this file on node 2 and want to delete it
10:20 itpings its says no such file
10:20 social joined #gluster
10:20 itpings cannot access file.txt
10:21 SOLDIERz_ joined #gluster
10:21 itpings also is there a way to stop deleting files from node 2 because it automatically deletes files on node1
10:26 arash ok
10:26 arash cluster.quorum-type none and its fine
10:26 arash problem goes away
10:27 arash but http://docs.opennebula.org/4.6/administration/storage/gluster_ds.html recommends quorum must be on ?
10:29 ndarshan joined #gluster
10:30 SOLDIERz_ joined #gluster
10:32 soumya_ joined #gluster
10:37 badone__ joined #gluster
10:43 Pupeno joined #gluster
10:43 Pupeno joined #gluster
10:44 ppai joined #gluster
11:11 gem joined #gluster
11:18 anrao joined #gluster
11:31 ndarshan joined #gluster
11:36 kkeithley1 joined #gluster
11:38 Manikandan joined #gluster
11:38 anrao joined #gluster
11:39 ntt Hi. After an upgrade from 3.4 to 3.6 i have a problem with the "mount point" of the volume: in /etc/fstab i have /dev/sdb1 -> /export/brick1 and i built the volume with root folder = /export/brick1. In 3.6 i should change the root folder of the volume from /export/brick1 to /export/brick1/gv0. What is the right procedure for this? Should i delete the volume, save the content in another dir, recreate a new volume (with the correct mo
11:40 mator ntt you are not required to move the brick from the root folder to a subfolder, but it is recommended
11:40 mator you can still run with the brick sitting on the root folder
11:42 mator the recommendation to use a folder instead of the root is that, in case of a filesystem crash, you would otherwise end up with lost+found in the brick / on the volume
11:42 ntt mator: i know (in fact gluster restarted without problems) but i would like to stay consistent with this recommendation
11:43 soumya joined #gluster
11:43 mator its up to you
11:44 ntt ok..... my procedure seems to be correct?
11:45 kkeithley_ xfs doesn't have a lost+found. If you followed the other guideline and used xfs.
11:45 ntt yes... i'm using xfs
11:46 ntt an (off topic) question about xfs. What is the recommendation about block size ? size=512?
11:47 mator kkeithley_, since when xfs does not have lost+found? I mean if you have a broken fs, and runs xfs_repair
11:49 mator ntt, mkfs t
11:49 mator xfs i
11:49 mator size=512 f
11:49 mator mkfs -t xfs -i size=512
11:49 ntt ok.... thanks mator
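Putting the two recommendations together (XFS with 512-byte inodes, and a brick directory one level below the mount point), a rough sketch using ntt's paths; the server name and single-brick layout are only illustrative:

    mkfs -t xfs -i size=512 /dev/sdb1
    mount /dev/sdb1 /export/brick1          # normally via /etc/fstab
    mkdir /export/brick1/gv0                # the directory gluster actually uses as the brick
    gluster volume create gv0 server1:/export/brick1/gv0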
11:50 mator ntt https://rhsummit.files.wordpress.com/2013/07/england_th_0450_rhs_perf_practices-4_neependra.pdf
11:50 ildefonso joined #gluster
11:51 ntt mator: thank you. I really need more documentation about gluster
11:53 mator ntt, you may find this other one useful too https://rhsummit.files.wordpress.com/2014/04/bengland_h_1100_rhs_performance.pdf
11:56 itisravi joined #gluster
12:00 kkeithley_ oh, does xfs_repair create a lost+found? I guess I've never had to run xfs_repair
12:01 mator kkeithley_, lucky you
12:06 ira joined #gluster
12:08 diegows joined #gluster
12:40 awerner joined #gluster
12:49 rjoseph joined #gluster
12:54 glusterbot News from newglusterbugs: [Bug 1163543] Fix regression test spurious failures <https://bugzilla.redhat.com/show_bug.cgi?id=1163543>
12:55 ws2k3 Hello i'm trying to create a volume with gluster but it says /mnt/sdb1 is already part of a volume however gluster volume info tells me there are no volumes present
12:56 kkeithley_ ,,(xattr)
12:56 glusterbot I do not know about 'xattr', but I do know about these similar topics: 'xattr afr format'
12:58 ws2k3 kkeithley so how should i fox that ?
12:58 ws2k3 fix*
12:58 kkeithley_ you've used that fs for a gluster volume previously. There are xattrs on the top-level directory that you need to remove before you can use it again for gluster. This is why we suggest using a subdir, you could just `rm -rf $dir`. If you didn't use a subdir then you need to remake the fs
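The usual alternative to remaking the fs (or to using force) is to strip those xattrs and the .glusterfs directory from the old brick root; a sketch assuming the brick is /mnt/sdb1 as above:

    setfattr -x trusted.glusterfs.volume-id /mnt/sdb1
    setfattr -x trusted.gfid /mnt/sdb1
    rm -rf /mnt/sdb1/.glusterfs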
13:00 ws2k3 i see, i just added force at the end of my gluster volume create, is that okay to do? cause now it did create the volume
13:00 ws2k3 this is a testing only setup, i just want to do some testing
13:01 maveric_amitc_ joined #gluster
13:04 chirino joined #gluster
13:04 ws2k3 on my new created volume it complains about failed to fetch volume file (key:/mnt/sdb1/data)
13:06 lalatenduM joined #gluster
13:09 bennyturns joined #gluster
13:14 ws2k3 when i have 2 servers replicated and i mount the volume, do i write to both servers at the same time from my client, or how does it work?
13:18 soumya joined #gluster
13:18 anoopcs joined #gluster
13:22 sprachgenerator joined #gluster
13:28 sprachgenerator joined #gluster
13:49 ws2k3 is there a repository for gluster 3.5.2 for centos? i'm unable to find it
13:50 ndevos ws2k3: LATEST=3.5.3 - http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/CentOS/
13:50 mator http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/CentOS/
13:50 shubhendu joined #gluster
13:52 mator cd /etc/yum.repos.d && wget http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/CentOS/glusterfs-epel.repo
13:52 ws2k3 ndevos i'm trying to install glusterfs 3.5.2 on xenserver now i found you discussing it here with someone: http://download.gluster.org/pub/gluster/glusterfs/3.5/3.5.2/EPEL.repo/glusterfs-epel.repo.el5 so i tried http://download.gluster.org/pub/gluster/glusterfs/3.5/LATEST/EPEL.repo/glusterfs-epel.repo.el5 and that worked, only it installs 3.6. is there a .repo file that i can use for 3.5.2?
13:52 maveric_amitc_ joined #gluster
13:53 mator ws2k3, why 3.5.2 and not 3.5.latest ?
13:57 ndevos ws2k3: ah, that repo file does not have the correct version, you should be able to edit the file and correct the url
13:58 ndevos ws2k3: and please file a bug so that someone can correct the .repo.el5 file (and check the others)
13:58 glusterbot https://bugzilla.redhat.com/en​ter_bug.cgi?product=GlusterFS
13:58 ws2k3 cause in my test setup i still run 3.5.2, just wanted to see if xenserver can work with a glusterfs storage repository
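A sketch of the edit ndevos suggests, assuming the downloaded file was saved as /etc/yum.repos.d/glusterfs-epel.repo; it just pins the baseurl to 3.5.2 instead of LATEST:

    sed -i 's,/3.5/LATEST/,/3.5/3.5.2/,g' /etc/yum.repos.d/glusterfs-epel.repo
    yum clean all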
14:02 ws2k3 it now complains Not using downloaded repomd.xml because it is older
14:04 mator yum clean all
14:05 ws2k3 glusterfs-fuse-3.5.2-1.el5.x86_64 from glusterfs-epel has depsolving problems
14:05 ws2k3 --> Missing Dependency: glusterfs = 3.5.2-1.el5 is needed by package glusterfs-fuse-3.5.2-1.el5.x86_64 (glusterfs-epel)
14:05 ws2k3 Error: Missing Dependency: glusterfs = 3.5.2-1.el5 is needed by package glusterfs-fuse-3.5.2-1.el5.x86_64 (glusterfs-epel)
14:06 R0ok_ ws2k3: are you on centos 6 ?
14:07 R0ok_ ws2k3: just disable all repos except the glusterfs repo
14:08 R0ok_ ws2k3: yum --disablerepo=* --enablerepo=glusterfs install glusterfs-server
14:11 ws2k3 it still comes up with the same error
14:11 ws2k3 i'm on xenserver 6.5 which is based upon centos 6.5
14:12 ws2k3 ndevos i modified my .repo file this is how it looks like now : http://pastebin.com/Fa0mVgsR
14:12 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
14:14 ws2k3 ndevos http://fpaste.org/187541/24355237/
14:15 lalatenduM joined #gluster
14:16 R0ok_ ws2k3: centos-base repo also has glusterfs packages that is why you need to temporarily disable it so that you can download & install glusterfs packages from glusterfs-epel repo
14:16 R0ok_ ws2k3: clear yum cache => yum clean all
14:17 ws2k3 R0ok_ its not centos its xenserver so i only have one citrix repo and thats it
14:17 R0ok_ ws2k3: ok, so what's the name of that repo ? is it citrix ?
14:17 ws2k3 yes
14:18 ws2k3 i already disabled all repo's except glusterfs-epel and gluster-source-epel
14:18 ws2k3 do i need to disable gluster-source-epel to ?
14:18 R0ok_ ws2k3: no
14:18 ws2k3 when i now do yum install glusterfs-client it gives me the dependency issues
14:19 ws2k3 but i already installed glusterfs 3.6.2 and then i removed it again, maybe that is causing issues
14:19 R0ok_ ws2k3: please provide the output on pastebin
14:19 mator rpm -qa | grep gluster
14:20 ws2k3 http://fpaste.org/187545/35560414/
14:20 ws2k3 http://fpaste.org/187546/55643142/
14:21 mator so you have part installed from gluster 3.6.x and want to install some part of 3.5.2 ?
14:21 mator either use 3.6.x or 3.5.x
14:21 ws2k3 no i made the mistake of installing 3.6.x, i want to fully remove 3.6.x and only use 3.5.2
14:22 R0ok_ ws2k3: just yum erase those 3.6.x packages
14:22 mator in your case remove all leftover 3.6.x packages (rpm -e or yum remove) and install 3.5.x
14:22 ws2k3 i only used to work with debian so i'm used to apt-get
14:23 ws2k3 sweet thank you R0ok_
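A sketch of that clean-up; the repo id glusterfs-epel is taken from the .repo file name and may differ on a given system:

    rpm -qa | grep gluster                   # list the 3.6.x leftovers
    yum remove 'glusterfs*'                  # or rpm -e the packages listed above
    yum clean all
    yum --disablerepo='*' --enablerepo=glusterfs-epel install glusterfs glusterfs-fuse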
14:23 R0ok_ ws2k3: aight dude, you got 3.5.x ?
14:24 glusterbot News from newglusterbugs: [Bug 1194306] [AFR-V2] - Do not count files which did not need index heal in the first place as successfully healed <https://bugzilla.redhat.com/show_bug.cgi?id=1194306>
14:25 ws2k3 R0ok_ yes, only now i have another issue: is there a PPA for ubuntu for glusterfs 3.5.2-1 ?
14:25 ws2k3 cause my gluster test setup still have 3.5.2
14:27 R0ok_ ws2k3: i've never used gluster on ubuntu but you can check on gluster's website for apt repo
14:28 ws2k3 i used ppa and not the apt repo
14:33 tdasilva joined #gluster
14:33 social joined #gluster
14:34 theron joined #gluster
14:36 ws2k3 https://launchpad.net/~semiosis/+archive/ubuntu/ubuntu-glusterfs-3.5 i have this version but it is 3.5.2-ubuntu1~precise1, is this 3.5.2 or 3.5.2-1?
14:36 georgeh-LT2 joined #gluster
14:37 shaunm joined #gluster
14:37 theron joined #gluster
14:37 duyt1001 joined #gluster
14:38 theron joined #gluster
14:44 bala joined #gluster
14:50 deepakcs joined #gluster
14:55 shaunm joined #gluster
14:55 wkf joined #gluster
15:07 RameshN joined #gluster
15:16 nbalacha joined #gluster
15:20 wushudoin joined #gluster
15:21 spot joined #gluster
15:22 soumya|afk joined #gluster
15:23 jbrooks joined #gluster
15:25 chirino joined #gluster
15:25 spot joined #gluster
15:27 jobewan joined #gluster
15:36 cfeller joined #gluster
15:37 social joined #gluster
15:40 _Bryan_ joined #gluster
15:41 squizzi joined #gluster
15:48 dgandhi joined #gluster
15:48 shubhendu joined #gluster
15:52 kshlm joined #gluster
15:55 kshlm joined #gluster
15:58 kshlm spot, ping!
15:58 kshlm Are you online?
15:59 chirino joined #gluster
16:01 spot kshlm: barely. the hotel wifi is bad here.
16:02 kshlm joined #gluster
16:03 spot kshlm: hi
16:03 spot kshlm: i seem to be able to stay on the wifi better without the red hat vpn enabled.
16:04 kshlm I'm in a similar situation at home currently. Bad network.
16:04 bennyturns joined #gluster
16:05 kshlm I submitted the initial page, and I'm now facing a questionnaire.
16:05 T3 joined #gluster
16:05 kshlm I'm hoping you can help me with it.
16:05 spot okay
16:06 kshlm Cool. I'll dump it onto an etherpad someplace, and we collaborate there.
16:07 spot okay.
16:11 kshlm spot, https://public.pad.fsfe.org/p/Gluster_GSOC2015_Questionairre
16:11 spot kshlm: do you have the stats from 2014?
16:12 kshlm 1 project, 1 success.
16:12 kshlm We had done it under the Fedora Project though.
16:12 Gill joined #gluster
16:12 spot okay, so did you check the veteran box or not?
16:13 kshlm I was in two minds, but I checked it.
16:13 kshlm We have participated, but not as an organization.
16:16 PaulCuzner joined #gluster
16:16 spot kshlm: do we have a list of interested mentors? :)
16:17 T0aD joined #gluster
16:17 kshlm I know several who are interested, but haven't yet officially put down their names.
16:18 kshlm Based on http://www.gluster.org/community/documentation/index.php/Projects#Projects_with_mentors we have at least 3.
16:18 CyrilPeponnet guys, a geo-rep has been stuck for a long time with this kind of message in the slave log:  0-glusterfs-fuse: 11682: /.gfid/00610c81-49d6-4238-af7b-57e6985bfe49 => -1 (Operation not permitted)
16:18 CyrilPeponnet any hints ?
16:23 kshlm spot, You're awesome. I've spent days thinking over the answers, and you just did it. :)
16:23 kshlm spot++
16:23 glusterbot kshlm: spot's karma is now 1
16:24 spot kshlm: thanks. :)
16:25 spot kshlm: read over those answers and let me know if you think any of them need any changes.
16:25 kshlm I am reading them.
16:27 kshlm spot, do you think it'll be good to mention the actual project we completed last time (for the previous involvement question)
16:27 theron joined #gluster
16:29 spot kshlm: yes, absolutely
16:29 plarsen joined #gluster
16:31 shubhendu joined #gluster
16:32 scuttle|afk joined #gluster
16:33 squizzi joined #gluster
16:34 kshlm spot, your answers are good. Can you give your opinion on the two answers I wrote?
16:34 spot kshlm: they look great.
16:34 spot kshlm: go ahead and submit!
16:34 * spot is going to go find breakfast
16:34 kshlm Cool!
16:35 kshlm Thanks.
16:35 kshlm and bon_appétit
16:35 spot thank you!
16:49 kovsheni_ joined #gluster
16:51 kovshenin joined #gluster
16:59 CyrilPeponnet any geo-repo users here ?
17:07 chirino joined #gluster
17:09 gem joined #gluster
17:13 PeterA joined #gluster
17:17 calisto joined #gluster
17:20 See-9 joined #gluster
17:23 See-9 left #gluster
17:24 HighJynx joined #gluster
17:25 HighJinks joined #gluster
17:25 glusterbot News from newglusterbugs: [Bug 1194380] "mount error(6): No such device or address " when mounting gluster volume as samba-share <https://bugzilla.redhat.com/show_bug.cgi?id=1194380>
17:25 T3 joined #gluster
17:25 chirino joined #gluster
17:25 HighJinks test
17:26 PaulCuzner joined #gluster
17:32 deniszh joined #gluster
17:37 RameshN joined #gluster
17:38 alefauch joined #gluster
17:38 alefauch hi
17:38 glusterbot alefauch: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
17:40 alefauch I'm running glusterfs server 3.4.2-1~wheezy1 and I found out that if I send 256 '\0' (null byte) characters to the control port (49152) the server seems to have a sort of memory leak and ends up being killed by the system.
17:41 alefauch Anyone can reproduce this ? As it seems to me this is a trivial DOS case ...
17:46 ndevos alefauch: bug 1146470 handles that for 3.4, I'm not sure if it has been included in a 3.4 release yet <- kkeithley ?
17:46 glusterbot Bug https://bugzilla.redhat.com:443/show_bug.cgi?id=1146470 low, unspecified, ---, bugs, MODIFIED , The memories are exhausted quickly when handle the message which has multi fragments in a single record
17:47 the-me alefauch: dont send it, then nothing will happen :D
17:48 PeterA1 joined #gluster
17:50 jmarley joined #gluster
17:57 ttkg joined #gluster
18:01 theron joined #gluster
18:06 Rapture joined #gluster
18:06 TinMar joined #gluster
18:08 TinMar Hi, I need help.
18:12 TinMar I have VMs running on a Proxmox cluster and stored on GlusterFS. I added a replica brick and launched "volume heal gfs-volname full" and now my gluster servers are overloaded and my VMs are crashing.
18:12 TinMar How can I stop this!
18:12 plarsen joined #gluster
18:12 chirino joined #gluster
18:14 MacWinner joined #gluster
18:28 virusuy joined #gluster
18:28 virusuy joined #gluster
18:32 alan^ joined #gluster
18:33 diegows joined #gluster
18:33 alan^ Hi guys, I'm having an issue with some undetachable peers. I had some probed peers and decided to reimage a node (that was only peered, not part of the group) and now the UUID changed so I'm getting "Connected (Peer Rejected)"
18:33 alan^ *not part of the volume
18:34 alan^ but it doesn't work when I try to detach said peer
18:34 alan^ I tried while it's online and while it's offline, and it won't
18:34 alan^ Is there some way I can force the UUID on the newly imaged machine?
18:44 theron joined #gluster
18:45 theron joined #gluster
18:46 edong23 joined #gluster
18:49 ghenry joined #gluster
19:00 lalatenduM joined #gluster
19:17 bennyturns alan^, did you try force?
19:22 alan^ I hadn't and I did now and it worked
19:27 HighJinks trying to set up gluster to replicate content between two webservers, no NAS involved. is there a better way to do it than mounting each other’s volumes as /var/www/?
19:27 HighJinks and anyone have experience with the performance for a webserver use case? currently using unison…can’t imagine it’s worse
19:40 tdasilva joined #gluster
19:45 SOLDIERz_ joined #gluster
19:48 bennyturns HighJinks, hi yas!
19:48 HighJinks hello!
19:48 glusterbot HighJinks: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
19:49 bennyturns HighJinks, just mount the volume in /var/www/html on both servers
19:49 bennyturns I normally:
19:49 bennyturns copy my data off both servers to the gluster volume
19:50 bennyturns rm -rf to clean out the html dir
19:50 T0aD joined #gluster
19:50 bennyturns then mount the volume that I moved the data to in html
19:51 HighJinks yeah, that’s what i was planning
19:51 bennyturns yep it should work well
19:51 HighJinks do you cross-mount? as in, on server 1, mount server2’s brick, and vice versa?
19:51 HighJinks is it even possible to mount server 1’s brick to /var/www?
19:52 bennyturns HighJinks, you want the same content on both webservers right?
19:52 HighJinks yep yep
19:52 HighJinks replicating
19:52 bennyturns I would do gluster v create replica 2 server1:/mybrick server2:/mybrick
19:53 bennyturns then mount -g glusterfs localhost:/myvol
19:53 bennyturns -t
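Pulled together, bennyturns' sequence looks roughly like this; the volume name, brick paths and hostnames are illustrative:

    gluster volume create myvol replica 2 server1:/bricks/web server2:/bricks/web
    gluster volume start myvol
    # on each webserver: copy the site content onto the volume once, empty /var/www/html, then
    mount -t glusterfs localhost:/myvol /var/www/html
    # fstab equivalent:  localhost:/myvol  /var/www/html  glusterfs  defaults,_netdev  0 0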
19:53 semiosis you should consider enabling quorum, maybe even using replica 3
19:54 semiosis especially if writes could come from any client
19:54 HighJinks yep, definitely will have writes from both clients
19:55 bennyturns HighJinks, also you may want to have separate interfaces for gluster / web traffic
19:56 ws2k3 semiosis good to see you here, i have a question specially for you: is 3.5.2-ubuntu1~precise1 the same as glusterfs 3.5.2-1 ?
19:56 ws2k3 semiosis (from your ubuntu ppa page) https://launchpad.net/~semiosis/+archive/ubuntu/ubuntu-glusterfs-3.5
19:56 HighJinks bennyturns: what do you mean by interface?
19:56 semiosis ws2k3: use the new ,,(ppa)
19:56 glusterbot ws2k3: The official glusterfs packages for Ubuntu are available here: 3.4: http://goo.gl/M9CXF8 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
19:57 ws2k3 semiosis thank you
19:57 semiosis yw
19:57 bennyturns HighJinks, NICs. with replication you will be writing to both bricks at the same time, if you need to serve content as well it could eat your bandwidth
19:57 bennyturns if thats an option
19:57 HighJinks bennyturns: ah i see, that’s what i figured you meant. yeah, they would definitely be served on different interfaces
19:58 HighJinks bennyturns: eth1 goes out to the net, eth0 is local
19:58 ws2k3 dont know if its true but is it inefficient to have a replicated volume with 15 or 20 bricks?
19:58 HighJinks bennyturns: thanks for the help, i appreciate it.
19:58 bennyturns np! anytime!
20:02 HighJinks bennyturns: when i mount via localhost (or by name) whenever i try to make a directory, i get this error: mkdir: cannot create directory `testdir2': Remote I/O error
20:02 HighJinks bennyturns: it still makes the dir and replicates it fine, but the error persists
20:03 HighJinks is that something to be wary of? i can’t find any documentation on it
20:03 bennyturns doesnt sound right
20:03 bennyturns did you create fresh dirs for the bricks?
20:03 HighJinks yep
20:04 HighJinks if i mount server2:/vol on server1, i don’t get the error. it’s only via localhost
20:05 bennyturns lemme try it real quick, it shouldnt be a problem.  I mount to myself to test all the time
20:07 PeterA joined #gluster
20:07 ws2k3 dont know if its true but is it inefficient to have a replicated volume with 15 or 20 bricks?
20:10 HighJinks bennyturns: i only get it on one server as well.
20:11 bennyturns ws2k3, nope, with 3 way replica we are looking more at lots of disks / JBOD configs
20:11 bennyturns ws2k3, with some of the new features it looks like lots of bricks will become more prevalent
20:12 jackdpeterson2 joined #gluster
20:12 bennyturns I am still doing most things on RAID 6 and RAID 10
20:13 bennyturns but I am seeing lots of testing on new features where JBODs are getting used.  Also with SSD RAID is not optimal
20:13 bennyturns you are thinking like a 10x2 volume with 2 nodes and 20 disks on each node?
20:14 ws2k3 bennyturns sorry english is not my main language did not fully understand you
20:14 bennyturns (or somethign like that)
20:14 ws2k3 no i have like 22 servers and i was considering making one replicated volume across 22 servers
20:14 bennyturns yeah thats fine
20:14 ws2k3 i heard that at each ls all the servers are contacted
20:15 bennyturns I thought you meant disks wise, not server wise.
20:15 ws2k3 doesn't that mean: the more bricks you have the slower it will become ?
20:15 bennyturns ws2k3, no gluster scales linearly
20:16 bennyturns ws2k3, the more servers, the faster it should be. I don't have test results handy for ls; for mkdir I have some
20:17 PaulCuzner joined #gluster
20:17 bennyturns I commonly test configs from 2 -> 16 nodes to demonstrate scalability
20:18 Pupeno_ joined #gluster
20:20 ws2k3 bennyturns ah okay and are you aware that i plan a pure replicated volume? so not distributed-replicated?
20:20 coredump joined #gluster
20:21 bennyturns ws2k3, hmm actually looking at my test numbers I am worried
20:21 bene2 joined #gluster
20:22 elico joined #gluster
20:22 JoeJulian 22 replicas is a *really* bad idea.
20:22 JoeJulian And a huge waste of money.
20:22 rotbeard joined #gluster
20:22 bennyturns JoeJulian, I thought we meant 22 briks 2 replica
20:22 ws2k3 JoeJulian dont you have a blog regarding gluster?
20:22 elico left #gluster
20:22 JoeJulian I do
20:22 ws2k3 JoeJulian i think i read on your blog that 22 replicas is a bad idea
20:23 bennyturns ws2k3, you don't mean 2 replica with 22 nodes right?
20:23 JoeJulian @lucky the dos and donts of gluster replication
20:23 glusterbot JoeJulian: http://joejulian.name/blog/glusterfs-replication-dos-and-donts/
20:23 ws2k3 yes i was about to copy that
20:24 ws2k3 so basically in a replicated volume the fewer replicas you have the faster it will be, right ?
20:24 ws2k3 so the ideal setup would be 2 bricks
20:24 ws2k3 if i understand correct(in a replicated volume)
20:25 bennyturns ws2k3, gluster writes to each replica at the same time so figure bandwidth / # replicas
20:25 bennyturns I think of it like:
20:25 JoeJulian Add to that one lookup per replica to ensure that none are out of date...
20:26 bennyturns throughput = ( NIC theoretical limit / number replicas ) - 20% overhead
20:27 jackdpeterson2 Hey all, have a question regarding getting a replica 2 node back into service. We had to pull it out due to a large number of split-brains that was causing the larger application to suffer. We have since corrected the race-condition scenario leading to the net split. I would like to bring that node back into service but without any risk of split brains (would like to cleanly mirror the current, in-service volumes). What's the process to do that?
20:27 bennyturns 1250 MB / sec for 10G, with replica 2 you get ~500 MB / sec
20:27 elico joined #gluster
20:28 TinMar left #gluster
20:28 JoeJulian ws2k3: And for what purpose would you have 22 replicas? What kind of SLA will require 10 nines?
20:29 JoeJulian jackdpeterson2: either an rm -rf, or format the brick and re-create the volume-id.
20:31 jackdpeterson2 so an rm -rf on the /mnt/brick/example01 for the disconnected node. Then volume sync?
20:32 bennyturns JoeJulian, I hope that guy doesnt do replica 22 :(  I didn't realize that is what he meant :P  I never think of anything past replica 2/3 so it didn't even cross my mind that someone would do that
20:32 JoeJulian jackdpeterson2: Would be a volume heal full, but yeah.
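A sketch of that sequence, using jackdpeterson2's brick path; the volume name and the volume-id value are placeholders, and the id has to be read from a healthy brick first:

    # on a healthy node: read the volume-id the brick must carry
    getfattr -n trusted.glusterfs.volume-id -e hex /mnt/brick/example01
    # on the node being brought back: empty the brick and restore the volume-id
    rm -rf /mnt/brick/example01/* /mnt/brick/example01/.glusterfs
    setfattr -n trusted.glusterfs.volume-id -v 0x<value-from-healthy-brick> /mnt/brick/example01
    service glusterd restart                 # or: gluster volume start <volname> force, to respawn the brick
    gluster volume heal <volname> full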
20:33 bennyturns looking at my notes mkdir / rmdir actually get worse as you scale out
20:34 ws2k3 joined #gluster
20:34 JoeJulian bennyturns: yeah, I just sat down and saw, "i plan a replicated volume? so not distribute replicated?" and boggled.
20:34 JoeJulian Yes, directories are created on every brick.
20:34 bennyturns yepo
20:34 ws2k3 i am back lost my connection for a second there
20:34 JoeJulian It's just files that get distributed.
20:35 bennyturns hopefully we can do a linkfile or something
20:35 bennyturns I have seen a proposal or something for it
20:35 theron joined #gluster
20:35 ws2k3 i remember i had to run a special script to see the tree of the data
20:35 ws2k3 too bad gluster does not have a web interface anymore
20:36 jackdpeterson2 @JoeJulian -- regarding the 'recreate the volume-id' part ... is that in say /var/lib/glusterd/vols or...?
20:36 JoeJulian @lucky replace glusterfs brick 3.3
20:36 glusterbot JoeJulian: http://www.gluster.org/pipermail/gluster-users/2012-October/011502.html
20:37 JoeJulian nope
20:37 JoeJulian Ah, close...
20:37 JoeJulian http://joejulian.name/blog/replacing-a-brick-on-glusterfs-340/
20:37 JoeJulian Same is true for all subsequent versions, of course.
20:58 badone__ joined #gluster
21:00 PaulCuzner joined #gluster
21:17 gildub joined #gluster
21:17 awerner joined #gluster
21:23 DV joined #gluster
21:24 Pupeno joined #gluster
21:24 Pupeno joined #gluster
21:38 johnbot I think this is most definitely an ubuntu 14.04/aws problem but here goes. After upgrading all the packages on ubuntu 14.04 last week and updating to gluster 3.6.2 I had multiple issues where a 1TB partitioned XFS brick in AWS EBS would become corrupted, causing it to lose its partition map. On the first occurrence I was able to run fdisk and recreate the partition without losing data, but not so during the
21:38 johnbot second incident, which forced me to recover a 2 day old snapshot. If anyone has experienced similar problems after updating to the latest ubuntu 14.04 or gluster version I'd be interested in hearing about it. Also want to know if there are any inherent problems with setting up gluster bricks without partition tables.
21:38 johnbot I should add that these were two separate bricks/ebs volumes that ended up having corrupted partitions
21:39 JoeJulian Wow, that doesn't sound good at all.
21:39 johnbot .... and the second occured after running a brick-remove on an unrelated volume....
21:39 JoeJulian Are you using a block device volume?
21:41 JoeJulian Not positive with a BD volume, though I'm sure it does everything through lvm so I don't think it could do that, nothing with any other volume type *could* overwrite the partition table.
21:41 JoeJulian Sounds to me like amazon has a storage bug.
21:41 johnbot JoeJulian: Hi Joe, each brick was a standard magnetic EBS volume with a single parition on each.
21:42 JoeJulian When I use the word volume, in #gluster, you can be confident I'm referring to gluster.
21:43 johnbot JoeJulian: yeah not sure yet what the root cause is but I didn't notice this behavior until last week after upgrading a ton of ubuntu 14.04 packages on each of my two gluster servers and updating gluster itself from 3.5.0 to 3.6.2
21:44 JoeJulian Since you have no control to how Amazon schedules updates, I wouldn't rule out coincidental timing.
21:44 johnbot I'm sure gluster is not at fault obviously but that leaves ubuntu14.04 or aws so thought I'd bring it up in the event someone else had the same issues
21:44 h4rry joined #gluster
21:44 johnbot JoeJulian: true, I've been bitten before by aws upder-the-hood updates
21:44 johnbot under
21:56 Pupeno_ joined #gluster
22:04 srsc joined #gluster
22:21 diegows joined #gluster
22:21 srsc ok, need some advice. using gluster 3.4.1, i have a distributed replicate (2x2) volume on debian 7.3 using tcp (ipoib). noticed that some files were missing on mounts. gluster peer status shows that peers 1 and 3 are connected with each other but not 2 or 4. likewise for 2 and 4 (2 and 4 are connected, but not to 1 and 3).
22:23 srsc logs don't seem to indicate what's going on. lots of "0-management: readv failed (No data available)" in the glusterd log, and several "glusterfsd: page allocation failure" traces in dmesg.
22:27 badone__ joined #gluster
22:28 plarsen joined #gluster
22:30 srsc i've tested connectivity between all the servers using ping and ibping, and that's all fine
22:32 malevolent joined #gluster
22:33 Pupeno joined #gluster
22:33 srsc also lots of "[server-resolve.c:419:resolve_anonfd_simple] 0-server: inode for the gfid ([removed]) is not found. anonymous fd creation failed" in the brick logs
22:33 Pupeno joined #gluster
22:46 h4rry joined #gluster
23:09 T3 joined #gluster
23:14 PaulCuzner joined #gluster
23:16 srsc peers are reconnected after rebooting all gluster servers, appears to have been an infiniband connectivity issue
23:29 srsc left #gluster
23:42 jackdpeterson2 Is there a safe way to add a failed node back in without client mounts losing the mount, assuming the new brick to be added is replacing an old brick [e.g., one of the gluster servers in a replica 2 gets hosed]
23:42 jackdpeterson2 *clients losing the mount w/ permission denied on a full heal
23:42 jackdpeterson2 or otherwise
23:43 JoeJulian Using a current version?
23:43 jackdpeterson2 @JoeJulian - 3.6.2 Centos 6.x
23:43 jackdpeterson2 same version, except ubuntu for clients
23:43 JoeJulian Shouldn't get EPERM due to healing...
23:45 JoeJulian A client should just hang until a chunk is healed if you exceed cluster.background-self-heal-count (defaults to 16). That's per-client.
23:48 jackdpeterson2 @JoeJulian -- so in our scenario, we have a ton of small files. I'm concerned that a heal (copying from the good node to the blank node) would take eons to complete and during that window things would be out client-side. Is that an unrealistic scenario?
23:50 JoeJulian Correct. Any file for which a lookup is done will start healing in the background while the file is accessed normally. Small files will, of course, heal quickly so it may not even be noticed.
23:51 jackdpeterson2 okay, and write operations on say a directory that's being healed?
23:51 JoeJulian If the client exceeds 16 background heals, lookups will happen in the foreground until a background slot is empty.
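If the default of 16 turns out to be too low, that limit is itself a per-volume option; the volume name and the value 32 are illustrative:

    gluster volume set myvol cluster.background-self-heal-count 32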
23:52 JoeJulian During all this, the self-heal daemon will be chugging away all on its own to heal files.
23:52 jackdpeterson2 so would a client literally be one of the fuse-mounted clients?
23:53 JoeJulian writes, reads, doesn't really matter. It should all be pretty transparent.
23:53 gildub joined #gluster
23:54 jackdpeterson2 So the overall process as mentioned above would be to rm -rf [bad bricks] on the bad server, get the trusted.gfid, get glusterd restarted... and then fire off the full heal? or just let the clients do their thing?
23:54 DV__ joined #gluster
23:54 jackdpeterson2 ^^ assuming that the glusterd service isn't running on bad server
23:57 jackdpeterson2 Trying to understand the benefit of the full heal in this scenario -- seems like a large blocking operation that couldn't be stopped or otherwise if things go sideways. Is there another route that can be taken that doesn't rely on the magic? e.g., an rsync?
23:58 JoeJulian It's not blocking.
23:58 JoeJulian The heal queue is established based on known changes to a file without that change happening to its replica.
23:59 JoeJulian If you've removed or lost files on a brick, the self-heal daemon won't know about that deficiency. A full crawl is necessary to find those missing files.
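A sketch of kicking that crawl off and watching its progress; the volume name is a placeholder:

    gluster volume heal myvol full      # queue the full crawl JoeJulian describes
    gluster volume heal myvol info      # list entries still pending heal, per brick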
