
IRC log for #gluster, 2015-08-04


All times shown according to UTC.

Time Nick Message
00:02 haomaiwa_ joined #gluster
00:02 nangthang joined #gluster
00:07 calavera joined #gluster
00:18 cc1 joined #gluster
00:23 nzero joined #gluster
00:39 morph- hi
00:39 glusterbot morph-: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
00:39 morph- anyone there
00:41 morph- JoeJulian
00:42 morph- I stopped my gluster volume to do some maintenance
00:42 morph- http://pastebin.ca/3087739
00:42 glusterbot Title: pastebin - Miscellany - post number 3087739 (at pastebin.ca)
00:42 morph- and now we have a split brain on gfid's
00:53 cc1 joined #gluster
00:59 morph- @gfid
00:59 glusterbot morph-: The gfid is a uuid that's assigned to represent a unique inode that can be identical across replicas. It's stored in extended attributes and used in the .glusterfs tree. See http://hekafs.org/index.php/2011/04/glusterfs-extended-attributes/ and http://joejulian.name/blog/what-is-this-new-glusterfs-directory-in-33/
00:59 morph- @split-brain
00:59 glusterbot morph-: To heal split-brains, see https://github.com/gluster/glusterfs/blob/master/doc/features/heal-info-and-split-brain-resolution.md . Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/ . For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/
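A gfid split-brain like the one morph- describes is typically inspected with the heal-info command and by comparing the gfid extended attribute on each brick. A minimal sketch, assuming a volume named "myvol" and a brick path of /data/brick (both hypothetical):

    # list entries the self-heal daemon flags as split-brain
    gluster volume heal myvol info split-brain
    # compare the gfid stored on each replica (run against the brick path on each server)
    getfattr -n trusted.gfid -e hex /data/brick/path/to/file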
01:02 haomaiwa_ joined #gluster
01:06 DV joined #gluster
01:08 kdhananjay joined #gluster
01:20 gildub joined #gluster
01:20 gildub joined #gluster
01:22 PaulCuzner joined #gluster
01:26 cc1 left #gluster
01:39 nangthang joined #gluster
01:46 badone__ joined #gluster
01:49 Lee1092 joined #gluster
02:00 plarsen joined #gluster
02:02 haomaiwa_ joined #gluster
02:22 gem joined #gluster
02:28 harish joined #gluster
02:32 calavera joined #gluster
02:38 Administrator__ joined #gluster
02:52 DV_ joined #gluster
02:59 plarsen joined #gluster
03:00 bharata joined #gluster
03:02 haomaiwa_ joined #gluster
03:18 maveric_amitc_ joined #gluster
03:25 overclk joined #gluster
03:38 vmallika joined #gluster
03:42 atinm joined #gluster
03:47 sakshi joined #gluster
03:48 atinm joined #gluster
03:49 shubhendu joined #gluster
03:50 itisravi joined #gluster
03:53 kshlm joined #gluster
03:57 ppai joined #gluster
03:58 TheSeven joined #gluster
04:02 haomaiwa_ joined #gluster
04:02 nbalacha joined #gluster
04:08 meghanam joined #gluster
04:13 deepakcs joined #gluster
04:13 nishanth joined #gluster
04:22 gem joined #gluster
04:25 ramky joined #gluster
04:25 yazhini joined #gluster
04:26 jwd joined #gluster
04:27 rafi joined #gluster
04:30 jwaibel joined #gluster
04:30 kdhananjay joined #gluster
04:32 jiffin joined #gluster
04:37 ndarshan joined #gluster
04:38 DV joined #gluster
04:38 kotreshhr joined #gluster
04:40 kanagaraj joined #gluster
04:42 lalatenduM joined #gluster
04:45 ppai joined #gluster
04:48 rjoseph joined #gluster
04:49 ramteid joined #gluster
05:02 hgowtham joined #gluster
05:02 haomaiwang joined #gluster
05:05 ashiq joined #gluster
05:06 skoduri joined #gluster
05:08 aravindavk joined #gluster
05:09 uebera|| joined #gluster
05:09 Manikandan joined #gluster
05:12 hagarth joined #gluster
05:15 sripathi1 joined #gluster
05:16 dusmant joined #gluster
05:20 vimal joined #gluster
05:24 pppp joined #gluster
05:30 sripathi2 joined #gluster
05:31 mjrosenb I think I've asked this before, but is there a clean upgrade path from 3.3.0 to 3.7?
05:31 mjrosenb or 3.6?
05:31 mjrosenb I just want something that natively supports freebsd.
05:32 Bhaskarakiran joined #gluster
05:52 ramky joined #gluster
05:54 prabu joined #gluster
05:57 deepakcs joined #gluster
06:00 uebera|| joined #gluster
06:04 jwd joined #gluster
06:05 sakshi joined #gluster
06:05 jwd joined #gluster
06:06 jwaibel joined #gluster
06:09 spalai joined #gluster
06:12 skoduri joined #gluster
06:13 raghu joined #gluster
06:13 anil joined #gluster
06:20 ramky joined #gluster
06:20 vimal joined #gluster
06:22 RameshN joined #gluster
06:25 kevein joined #gluster
06:26 deepakcs joined #gluster
06:27 ppai joined #gluster
06:34 kshlm joined #gluster
06:34 kdhananjay joined #gluster
06:41 PaulCuzner joined #gluster
06:42 TvL2386 joined #gluster
06:45 nangthang joined #gluster
06:50 ppai joined #gluster
06:51 PaulCuzner left #gluster
06:56 rwheeler joined #gluster
06:58 arcolife joined #gluster
07:03 Manikandan joined #gluster
07:06 rjoseph joined #gluster
07:10 atalur joined #gluster
07:10 gem joined #gluster
07:12 kdhananjay joined #gluster
07:12 kdhananjay joined #gluster
07:14 Bhaskarakiran joined #gluster
07:17 [Enrico] joined #gluster
07:19 user_42 joined #gluster
07:22 mbukatov joined #gluster
07:26 Slashman joined #gluster
07:34 fsimonce joined #gluster
07:39 jcastill1 joined #gluster
07:40 user_43 joined #gluster
07:41 hgowtham joined #gluster
07:47 ppai joined #gluster
07:49 jcastillo joined #gluster
07:51 gildub joined #gluster
07:57 anrao joined #gluster
07:58 ctria joined #gluster
08:09 nsoffer joined #gluster
08:14 user_42 joined #gluster
08:16 maveric_amitc_ joined #gluster
08:16 LebedevRI joined #gluster
08:21 atalur joined #gluster
08:26 jcastill1 joined #gluster
08:31 jcastillo joined #gluster
08:37 ppai joined #gluster
08:37 gem joined #gluster
08:37 arcolife joined #gluster
08:37 nangthang joined #gluster
08:37 RameshN joined #gluster
08:37 anil joined #gluster
08:37 ndarshan joined #gluster
08:37 aaronott joined #gluster
08:37 mkzero joined #gluster
08:37 neoice joined #gluster
08:37 JPaul joined #gluster
08:37 jon__ joined #gluster
08:37 _fortis joined #gluster
08:37 DJClean joined #gluster
08:37 poornimag joined #gluster
08:37 RobertLaptop joined #gluster
08:41 xavih natarej: a single erasure set cannot use bricks with different sizes (it will always take the smallest one), however you should be able to build a distributed-disperse volume where each erasure set could be of different size
08:43 PaulCuzner joined #gluster
08:43 natarej xavih, thanks.  i haven't come across any documentation on doing different sets outside of tiering, i'll have another look now
08:51 natarej ah you mean using add-brick
08:52 ppai joined #gluster
08:54 hgowtham_ joined #gluster
09:00 Manikandan joined #gluster
09:00 xavih natarej: or when you create a volume, yes
09:01 xavih natarej: if you replace an existing brick with a bigger one, you won't be able to use the extra space until all other bricks of the same erasure set are replaced
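To illustrate what xavih describes: a distributed-disperse volume strings several erasure sets together in one create command; each set is limited by its smallest brick, but the sets themselves may differ in size. A sketch with hypothetical hostnames and brick paths (disperse 3 / redundancy 1, two sets):

    gluster volume create dispvol disperse 3 redundancy 1 \
        srv1:/bricks/small srv2:/bricks/small srv3:/bricks/small \
        srv1:/bricks/large srv2:/bricks/large srv3:/bricks/large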
09:03 rjoseph joined #gluster
09:11 TvL2386 joined #gluster
09:16 kshlm joined #gluster
09:16 DV joined #gluster
09:21 lchabert joined #gluster
09:21 lchabert Hi !
09:22 lchabert i'm experiencing very poor performances on my replica 3
09:22 Romeor halolha
09:22 lchabert somebody available to help me ?
09:22 ajames-41678 joined #gluster
09:22 Romeor how do you measure it?
09:23 lchabert (~3MB/s through nfs sharing, same result with native client)
09:23 lchabert i'm using dd : dd if=/dev/zero of=test bs=8k count=10k
09:24 Romeor and what is your network backend?
09:24 Romeor 100/1g/10g/40g ?
09:24 lchabert 1G
09:25 s19n joined #gluster
09:25 Romeor and gluster version?
09:25 lchabert client and 3 servers connected with 1Gbit ethernet
09:25 Romeor any modifications to volume?
09:26 lchabert glusterfs 3.7.3 on centos 7
09:27 lchabert my volume is mounted over LVM:
09:27 lchabert 1 pv -> 1vg -> 2 lv
09:27 lchabert and one volume for each lv
09:27 ppai joined #gluster
09:28 lchabert yesterday, i have tried to resize down one LV
09:29 lchabert i resized down one lv to create two new lvs. So i stopped the gluster volume, did the lvm resize, created a new lv, and reformatted both new lvs with xfs
09:29 lchabert next restart gluster volume, and created a new one
09:30 lchabert so now, i have 3 volumes
09:30 lchabert and all volumes are slow
09:31 yazhini joined #gluster
09:32 Romeor and what dd results directly on lvm  volume are?
09:33 lchabert good performances (~750MB/s)
09:33 Romeor and what if you try bs=2M ?
09:33 Romeor count =50
09:34 lchabert around ~300MB/s
09:34 Romeor its ok
09:35 Romeor you don't create 8k files much aint you?
09:37 Romeor if you do
09:37 Romeor dpkg -i deb-multimedia-keyring_2015.6.1_all.deb try this
09:37 Romeor ops
09:38 Romeor http://www.gluster.org/community/documentation/index.php/Translators/storage/bdb
09:38 Romeor this
09:38 Romeor idk if this is still current. some1 of devs or JoeJulian could say
09:39 Romeor but testing glusterfs with bs=8k is very specific to me.
09:41 Romeor and just for information: replica performance = networking backend / replica count
09:41 Romeor so for replica 3 it's like 1gbps/3
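Putting rough numbers on that rule of thumb: the native client sends each write to every replica, so a 1Gbps link (roughly 118 MB/s of usable payload) divided across 3 copies leaves on the order of 39 MB/s of theoretical write throughput for a single replica 3 client, before any other overhead.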
09:43 paraenggu joined #gluster
09:43 paraenggu left #gluster
09:44 lchabert hum ok
09:44 lchabert thanks for this information, but on my case, this gluster storage will store kvm virtual disk
09:45 lchabert so large file (~20GB/disk)
09:45 Romeor then testing with 8k is pointless
09:46 * Romeor is using debian7+glusterfs 3.6.4 for KVM disks
09:46 Romeor and replica 2
09:46 Romeor no problems with performance
09:49 prabu joined #gluster
09:50 nishanth joined #gluster
09:52 shubhendu joined #gluster
09:53 nsoffer joined #gluster
09:54 DV joined #gluster
09:58 jiffin joined #gluster
09:58 msvbhat dsadasddasd
09:58 msvbhat ASd
09:58 msvbhat 'asd;'
09:58 rafi1 joined #gluster
09:58 Romeor asd yourself msvbhat
09:58 PaulCuzner joined #gluster
09:59 kshlm joined #gluster
09:59 Romeor weee
09:59 itisravi joined #gluster
09:59 atalur joined #gluster
09:59 sac joined #gluster
09:59 arcolife joined #gluster
09:59 overclk joined #gluster
09:59 kdhananjay joined #gluster
09:59 pppp joined #gluster
09:59 Bhaskarakiran joined #gluster
10:00 shubhendu joined #gluster
10:00 ashiq joined #gluster
10:00 lalatenduM joined #gluster
10:00 kotreshhr joined #gluster
10:00 kdhananjay joined #gluster
10:01 skoduri joined #gluster
10:01 rp_ joined #gluster
10:01 hgowtham_ joined #gluster
10:02 ndarshan joined #gluster
10:02 lchabert i have a replica 3 to avoid split-brain configuration (in case of one hardware failure)
10:02 nishanth joined #gluster
10:04 Romeor these may happen with whatever replica configuration (imho). I've got split-brain only twice with replica 2.
10:04 Romeor and with VM-s its very easy to fix: just restore vm from backup :)
10:05 ppai joined #gluster
10:05 lchabert and which configuration do you use for client ? livirt storage pool ?
10:05 Romeor i use proxmox. it uses its own storage
10:06 hagarth joined #gluster
10:06 Romeor it runs libgfapi out of the box
10:07 lchabert hum ok, because i have tested with libvirt storage pool, i don't understand how to set backup host
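For the FUSE mount (as opposed to a libvirt storage pool definition), a backup volfile server can be given as a mount option; a minimal sketch with hypothetical hostnames and volume name:

    mount -t glusterfs -o backupvolfile-server=gluster2 gluster1:/vmvol /mnt/vmstore

The backup server is only consulted to fetch the volume file if the first server is unreachable; once mounted, the client talks to all bricks directly.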
10:07 DV_ joined #gluster
10:07 Pupeno joined #gluster
10:09 lchabert and if i delete all my configuration, erase all disks and data, do you think that performance will be there afterwards?
10:10 lchabert i'm in pre-production project, so i can make it
10:10 yazhini joined #gluster
10:12 rwheeler joined #gluster
10:15 jcastill1 joined #gluster
10:16 Romeor i don't understand your problems with performance.. i don't see any. so why do you worry about it?
10:16 rjoseph joined #gluster
10:17 lchabert because when i write data directly on LVM, i have ~300MB/s, and with gluster client ~3MB/s
10:18 lchabert so i have a problem with glusterfs client (in my point of view)
10:18 lchabert but i don't understand why ... :/
10:18 Romeor you have problem only with dd and bs=8k
10:18 Romeor it is not the right way to test
10:19 Romeor its ok to test so only if you will run some application that need lots of small files
10:19 Romeor if you will run glusterfs for vm images
10:19 Romeor it is right to test with at least bs=5M count=50
10:21 anrao joined #gluster
10:23 gildub joined #gluster
10:23 lchabert yes yes with big blocksize, 5M, same result, 3MB (100 times slower)
10:23 lchabert 3MB/s
10:23 lchabert as you can see :
10:23 lchabert dd if=/dev/zero of=/mnt/gluster/disk/test2 bs=5M count=50
10:23 lchabert 50+0 records in
10:23 lchabert 50+0 records out
10:23 lchabert 262144000 bytes (262 MB) copied, 82.2005 s, 3.2 MB/s
10:24 Romeor oh, you've said a while ago that its ok with bs=5m
10:24 Romeor check your network. seems like something runs 100 mbps
10:24 lchabert no it's ok only when i write data directly on LVM
10:24 Romeor just to be sure
10:24 lchabert ok, let me check
10:25 Romeor run ethtool for every connected network card
10:25 Romeor it will show actual link state
10:26 lchabert iperf report said: 940 Mbits/sec
10:26 Romeor between every server and client?
10:27 vmallika joined #gluster
10:27 Romeor do i understand right that you've got 3 separate servers for each replica volume?
10:29 lchabert oh, one server has only 12Mbit/s bandwidth Oo
10:29 Romeor hah
10:29 Romeor check its NIC with ethtool
10:30 RameshN joined #gluster
10:30 PaulCuzner joined #gluster
10:30 lchabert very strange ... this explains why. ethtool reports 1000Mbit/s
10:31 Romeor check switch port for errors
10:32 Romeor maybe due to faulty cable also
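What Romeor suggests can be checked with ethtool and iperf; the interface name and hostnames below are placeholders:

    # negotiated link speed and duplex on each node
    ethtool eth0 | grep -E 'Speed|Duplex'
    # raw TCP throughput, server side then client side
    iperf -s
    iperf -c gluster1

A NIC that negotiates 1000Mb/s in ethtool but only delivers ~12Mbit/s in iperf usually points at a bad cable, switch port errors, or a duplex mismatch.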
10:32 jcastillo joined #gluster
10:43 ctria joined #gluster
10:48 Trefex joined #gluster
10:56 Trefex1 joined #gluster
10:58 ajames-41678 joined #gluster
10:58 ppai joined #gluster
11:00 firemanxbr joined #gluster
11:01 ndarshan joined #gluster
11:07 RameshN joined #gluster
11:08 shubhendu joined #gluster
11:15 foster joined #gluster
11:16 kanagaraj joined #gluster
11:18 Manikandan joined #gluster
11:20 kkeithley1 joined #gluster
11:20 elico joined #gluster
11:25 julim joined #gluster
11:26 _Bryan_ joined #gluster
11:33 harish joined #gluster
11:35 arcolife joined #gluster
11:38 R0ok_ joined #gluster
11:40 ppai joined #gluster
11:42 abyss^ joined #gluster
11:43 ndevos rafi1, itisravi: one of you hosts the bug triage meeting today?
11:43 lchabert yes i will check on network hardware, thanks romeor, i will update you if it solve my problem
11:45 itisravi ndevos: I'll have to give it a pass, sorry.
11:46 Manikandan joined #gluster
11:51 bennyturns joined #gluster
11:58 Philambdo joined #gluster
12:03 atalur joined #gluster
12:05 rafi joined #gluster
12:09 unclemarc joined #gluster
12:09 rafi joined #gluster
12:10 Manikandan joined #gluster
12:11 itisravi_ joined #gluster
12:19 elico joined #gluster
12:23 ppai joined #gluster
12:24 nbalacha joined #gluster
12:25 jcastill1 joined #gluster
12:26 magamo joined #gluster
12:27 magamo Hello.  I've a question about geo-replication in Gluster 3.7.X
12:27 magamo Is there a way to force a full crawl and resync?  I've got a couple of nodes at 'Faulty'.
12:30 jcastillo joined #gluster
12:38 maveric_amitc_ joined #gluster
12:38 Slashman joined #gluster
12:39 ajames41678 joined #gluster
12:42 jcastill1 joined #gluster
12:43 plarsen joined #gluster
12:45 ppai joined #gluster
12:54 rafi joined #gluster
12:58 mbukatov joined #gluster
12:59 jcastillo joined #gluster
13:03 Manikandan joined #gluster
13:04 jrm16020 joined #gluster
13:08 nsoffer joined #gluster
13:10 theron joined #gluster
13:10 Lee1092 joined #gluster
13:14 pppp joined #gluster
13:28 yazhini joined #gluster
13:33 arcolife joined #gluster
13:45 bennyturns joined #gluster
13:46 ashiq joined #gluster
13:47 spalai left #gluster
13:50 _dist joined #gluster
13:50 arcolife joined #gluster
13:51 hamiller joined #gluster
13:52 dijuremo joined #gluster
13:56 dgandhi joined #gluster
13:57 aaronott joined #gluster
13:57 ppai joined #gluster
14:00 B21956 joined #gluster
14:01 PaulCuzner joined #gluster
14:02 arcolife joined #gluster
14:03 cholcombe joined #gluster
14:06 nsoffer joined #gluster
14:07 haomaiwa_ joined #gluster
14:15 aaronott joined #gluster
14:18 bennyturns joined #gluster
14:19 eljrax Is GlusterFS replication strictly a server-side thing, or is the client involved in any way?
14:19 eljrax Or does it depend on whether I'm using the native client or NFS?
14:20 bennyturns ELCALOR, its client side for native client, it writes to both bricks at the same time
14:20 bennyturns for NFS / SMB there is an extra hop
14:20 bennyturns but fuse/glusterfs is smart enough to know where files live
14:20 eljrax What do you mean by an extra hop?
14:20 bennyturns client -> node mounted -> node where file hashes to
14:20 bennyturns NFS ^
14:21 l0uis the nfs daemon running on the gluster server does the replication (just like any gluster native client)
14:21 eljrax Ah right, so the server replicates the file on behalf of the client?
14:21 bennyturns client -> node file hashes to
14:21 bennyturns FUSE ^
14:21 bennyturns sorta, it puts the file where it goes if the file doesnt hash to itself
14:21 bennyturns since NFS / SMB isn't aware of DHT
14:22 eljrax How does this look with distributed+replicated volumes and the native client? I'm benchmarking distributed volumes as considerably slower than just replicated ones, and I can't quite work out where that comes from.
14:22 eljrax Like 13 Mb/s for 2x2 vs 20 Mb/s for 1x2
14:22 eljrax Thanks for clarifying that, makes sense
14:23 bennyturns dist rep should be == rep for single threaded perf
14:23 bennyturns dist rep will be faster for multithreads / multi file workloads
14:23 bennyturns eljrax, what version?
14:23 eljrax See that's what I thought too, it makes sense. But with any type of concurrency, I'm measuring slower performance with distributed
14:24 eljrax 3.7.2
14:24 bennyturns what are you running to test?
14:24 eljrax I genuinely don't know why and it perplexed me so that I asked someone else to run their own benchmarks, and theirs is slower for dist as well
14:24 eljrax I've tried both sysbench and fio
14:24 * bennyturns tries
14:25 eljrax Various read/write ratios, mainly rndrw with a 70/30  r/w ratio
14:26 eljrax sysbench --test=fileio --file-num=128 --file-total-size=2G --file-test-mode=rndrw --file-io-mode=async --file-fsync-freq=100 --max-requests=50000 --num-threads=50 run
14:26 eljrax Happy to pastebin the fio job if that'd help
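For reference, a rough fio job approximating the same 70/30 random read/write mix (the path, sizes and queue depth are assumptions, not eljrax's actual job file):

    [gluster-rndrw]
    directory=/mnt/gluster/bench
    rw=randrw
    rwmixread=70
    bs=8k
    size=2g
    ioengine=libaio
    iodepth=16
    numjobs=4
    runtime=120
    time_based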
14:28 lchabert left #gluster
14:28 plarsen joined #gluster
14:29 nbalacha joined #gluster
14:32 bennyturns eljrax, https://paste.fedoraproject.org/251421/98713143/
14:32 glusterbot Title: #251421 Fedora Project Pastebin (at paste.fedoraproject.org)
14:32 bennyturns that is what I get
14:32 bennyturns I see up to 10% delta between runs depending on workload
14:32 bennyturns so I would call those statistically insignificant
14:33 eljrax bennyturns: I just did this: http://fpaste.org/251422/69875014/
14:33 glusterbot Title: #251422 Fedora Project Pastebin (at fpaste.org)
14:33 eljrax That's single thread and single file though. I haven't tested that
14:34 bennyturns eljrax, ya I thought that's what you were talking about
14:34 bennyturns lemme do swomething multi threaded
14:34 eljrax So first run is on a 1x2 volume resulting in 19 Mb/s over 1200 iops, then add two bricks so it's a 2x2, rebalance, run the exact same benchmark, and it goes down to 14 Mb/s over 900 iops
14:34 bennyturns dist rep will smoke it though
14:34 eljrax smoke it? :)
14:35 bennyturns ya lemme do 4 threads per client with 4 clients
14:35 bennyturns 1 on dist
14:35 bennyturns 1x2
14:35 bennyturns and 1 on 2x2
14:38 eljrax All my benchmarks are network capped fwiw, but I can't see why there'd be more network  traffic just because distribution is involved
14:39 eljrax The DHT stuff is happening all on the client, right? It doesn't do any extra round-trips when it's in distributed?
14:40 bennyturns eljrax, here is 2x2 https://paste.fedoraproject.org/251426/69923714/
14:40 glusterbot Title: #251426 Fedora Project Pastebin (at paste.fedoraproject.org)
14:41 rafi joined #gluster
14:43 hagarth joined #gluster
14:45 arao joined #gluster
14:47 eljrax bennyturns: That runs that test from four different hosts?
14:47 nishanth joined #gluster
14:47 bennyturns eljrax, 4 clients mounting a 2x2 vol on 4 servers backed by raid6 + 10g nics
14:47 * eljrax salivates slightly
14:47 eljrax ok
14:48 bennyturns I screwed up my config file, doing the 1x2 now with 4 clients
14:49 eljrax It better be faster, or I'll be even more confused than when I started :-)
14:50 Philambdo joined #gluster
14:56 ajames41678 joined #gluster
14:57 cyberswat joined #gluster
14:57 social joined #gluster
14:59 bennyturns eljrax, https://paste.fedoraproject.org/251430/14387003/
14:59 glusterbot Title: #251430 Fedora Project Pastebin (at paste.fedoraproject.org)
15:00 bennyturns that is the 1x2
15:00 bennyturns so there is a little discrepancy
15:00 haomaiwa_ joined #gluster
15:00 bennyturns I bet if I ran with more threads I could get it closer to 50/50
15:01 eljrax But yours is different the other way around it seems
15:01 eljrax So you're faster at 2x2
15:01 eljrax bbl, meeting time :/
15:01 m0zes joined #gluster
15:02 bennyturns ya it should be, maybe since you went from 1x2 to 2x2?
15:03 bennyturns did you do fix layout?
15:03 bennyturns maybe try it with a fresh rep vol and a fresh dis-rep vol
15:03 bennyturns I just mkdir /brick1 and /brick2 on the same XFS
15:03 bennyturns and have both volumes at the sametime
15:03 bennyturns no need to recreate or add bricks
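A sketch of that side-by-side setup (hostnames and paths hypothetical): separate brick directories on the same XFS filesystem on each server, backing one replica-only and one distributed-replicated volume at the same time:

    mkdir -p /bricks/xfs/{rep,distrep1,distrep2}        # on both servers
    gluster volume create repvol replica 2 \
        srv1:/bricks/xfs/rep srv2:/bricks/xfs/rep
    gluster volume create distrepvol replica 2 \
        srv1:/bricks/xfs/distrep1 srv2:/bricks/xfs/distrep1 \
        srv1:/bricks/xfs/distrep2 srv2:/bricks/xfs/distrep2
    gluster volume start repvol && gluster volume start distrepvol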
15:13 cc1 joined #gluster
15:16 ajames-41678 joined #gluster
15:16 sripathi joined #gluster
15:18 sripathi left #gluster
15:21 Gill joined #gluster
15:26 bennyturns eljrax, I think you are correct there is a perf hit for dist
15:27 bennyturns hmmmm
15:36 aaronott joined #gluster
15:39 harish joined #gluster
15:40 bennyturns here is what I get with a 2x1 vol:
15:40 bennyturns Children see throughput for 16 initial writers = 2110690.81 KB/sec
15:40 bennyturns line speed is 2.4
15:40 jbrooks joined #gluster
15:45 atalur joined #gluster
15:59 theron_ joined #gluster
16:00 haomaiwa_ joined #gluster
16:08 nzero joined #gluster
16:12 aaronott joined #gluster
16:15 gem joined #gluster
16:17 jiffin joined #gluster
16:22 calavera joined #gluster
16:23 daMaestro joined #gluster
16:25 arao joined #gluster
16:45 theron joined #gluster
17:00 haomaiwa_ joined #gluster
17:03 techsenshi joined #gluster
17:04 Peppard joined #gluster
17:06 _Bryan_ joined #gluster
17:24 skoduri joined #gluster
17:25 rafi joined #gluster
17:26 Rapture joined #gluster
17:34 julim joined #gluster
17:39 nsoffer joined #gluster
17:47 jatb joined #gluster
17:56 nzero joined #gluster
17:56 elico joined #gluster
17:56 ramky joined #gluster
18:00 haomaiwa_ joined #gluster
18:05 Peppard joined #gluster
18:06 calavera joined #gluster
18:14 chirino joined #gluster
18:16 jiffin joined #gluster
18:21 chirino joined #gluster
18:27 nzero joined #gluster
18:27 theron_ joined #gluster
18:30 _dist joined #gluster
19:00 haomaiwa_ joined #gluster
19:05 arao joined #gluster
19:05 plarsen joined #gluster
19:09 dijuremo joined #gluster
19:10 dijuremo So I was trying to get a new Virtualization server to my RHEV infrastructure and it is unable to mount my gluster volume.
19:10 dijuremo gluster servers are running 3.7.2 whereas the newly installed client has 3.7.3
19:11 kovshenin joined #gluster
19:11 dijuremo Error on client is:    Failed to fetch volume file (key:/volname)
19:12 dijuremo Looking around I saw some issues about it and so I was trying to update one server at a time.
19:12 kovshenin Hey folks, quick question. I'm moving a brick to a new server but didn't go through the whole replication process. Instead I used rsync to sync the whole brick (not from the mounted volume) to a new server and removed the .glusterfs directory in the brick. Is something going to break? Have I lost any data?
19:12 dijuremo Now that one server was stopped and updated to 3.7.3 It will not communicate with the other server still running 3.7.2 ....
19:15 _maserati joined #gluster
19:17 dijuremo I had done "rolling" upgrade like that in the past, but is there an issue between 3.7.2 and 3.7.3 ?
19:26 calavera joined #gluster
19:32 atalur joined #gluster
19:41 atalur joined #gluster
19:44 JoeJulian dijuremo: apparently. Need to do the servers before the clients.
19:44 JoeJulian kovshenin: probably going to break, yes.
19:45 kovshenin JoeJulian: anything I can do?
19:45 JoeJulian kovshenin: there's a chance it won't break if you copied the extended attributes as well.
19:45 kovshenin after I copied it I removed .glusterfs, now I'm rsyncing the rest to /var/run/glusterfs/vol
19:46 JoeJulian @extended attributes
19:46 glusterbot JoeJulian: (#1) To read the extended attributes on the server: getfattr -m .  -d -e hex {filename}, or (#2) For more information on how GlusterFS uses extended attributes, see this article: http://pl.atyp.us/hekafs.org/index.php/2011/04/glusterfs-extended-attributes/
19:49 kovshenin so I guess I should recursively delete all extended attributes too, eh?
19:49 JoeJulian That will break it more.
19:50 kovshenin sorry I'm a bit lost here, I basically want to get my data in a clean way, as if it never were in gluster
19:50 JoeJulian GlusterFS is designed to have files added through the software. Directly modifying the bricks is like writing a chunk of data to the middle of your hard drive with dd and expecting xfs to know what to do with it.
19:51 kovshenin that's why I want to remove all attributes and then start a new fresh brick and rsync the "clean" files to the new brick via a mounted volume
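The commonly cited recipe for what kovshenin wants here, stripping gluster's own metadata from a copied brick so its contents can be fed into a fresh volume through the mount point, looks roughly like the sketch below (brick path and mount point are hypothetical):

    # remove gluster's volume/gfid markers from the old brick root
    setfattr -x trusted.glusterfs.volume-id /data/oldbrick
    setfattr -x trusted.gfid /data/oldbrick
    rm -rf /data/oldbrick/.glusterfs
    # copy the plain files into the new volume via its mount, not via the brick
    rsync -a /data/oldbrick/ /mnt/newvol/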
19:52 dijuremo @JoeJulian, but that is the point, mixed versions do not seem to work with 3.7.2 and 3.7.3, so I need downtime for this
19:52 dijuremo I had done all previous updates one server at a time with no downtime...
19:53 JoeJulian dijuremo: right. It's supposed to work. I've just had people coming in here saying it doesn't.
19:53 JoeJulian But I had someone try upgrading the servers before the clients and he reported success.
19:54 dijuremo I did try upgrading one server, then it would not connect to the other
19:54 dijuremo The server with updates kept showing disconnected for the other peers
19:55 dijuremo However, the server with 3.7.2 was showing in gluster peer status the updated server as connected...
19:59 _maserati_ joined #gluster
20:00 calavera joined #gluster
20:00 haomaiwa_ joined #gluster
20:02 __maserati__ joined #gluster
20:08 nzero joined #gluster
20:10 victori joined #gluster
20:11 JoeJulian Were your clients working?
20:11 JoeJulian Yeah, thinking about it, I would either schedule downtime or wait for this bug to be fixed.
20:13 sage_ joined #gluster
20:27 DV joined #gluster
20:43 jwd joined #gluster
20:46 nsoffer joined #gluster
20:47 badone_ joined #gluster
20:49 theron joined #gluster
21:00 theron_ joined #gluster
21:00 haomaiwa_ joined #gluster
21:01 nzero joined #gluster
21:03 B21956 joined #gluster
21:16 Larsen joined #gluster
21:29 timotheus1 joined #gluster
21:31 JoeJulian @apt
21:31 glusterbot JoeJulian: I do not know about 'apt', but I do know about these similar topics: 'afr', 'apc', 'api', 'ask'
21:31 JoeJulian @repo
21:31 glusterbot JoeJulian: I do not know about 'repo', but I do know about these similar topics: 'git repo', 'ppa repo', 'repos', 'repository', 'yum repo'
21:31 JoeJulian @ppa repo
21:31 glusterbot JoeJulian: semiosis' Launchpad PPAs have 32 & 64-bit binary packages of the latest Glusterfs for Ubuntu Lucid - Precise. http://goo.gl/DzomL (for 3.1.x) and http://goo.gl/TNN6N (for 3.2.x) and http://goo.gl/TISkP (for 3.3.x). See also @upstart
21:31 JoeJulian @forget ppa repo
21:31 glusterbot JoeJulian: The operation succeeded.
21:32 JoeJulian @repos
21:32 glusterbot JoeJulian: See @yum, @ppa or @git repo
21:32 theron joined #gluster
21:34 JoeJulian @learn ppa repo as The official PPAs can be found at https://launchpad.net/~gluster
21:34 glusterbot JoeJulian: The operation succeeded.
21:35 __maserati__ quit talking to the bot JoeJulian
21:36 JoeJulian :P
21:42 theron joined #gluster
21:48 calavera joined #gluster
21:54 nzero joined #gluster
22:00 haomaiwa_ joined #gluster
22:02 kkeithley_ @ppa
22:02 glusterbot kkeithley_: The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
22:02 kkeithley_ @ppa repo
22:02 glusterbot kkeithley_: The official PPAs can be found at https://launchpad.net/~gluster
22:05 julim joined #gluster
22:07 JoeJulian @forget ppa repo
22:07 glusterbot JoeJulian: The operation succeeded.
22:08 JoeJulian @alias "ppa" "ppa repo"
22:08 glusterbot JoeJulian: The operation succeeded.
22:08 JoeJulian All because I wanted to tell someone not to use 3.2...
22:11 RedW joined #gluster
22:21 autoditac joined #gluster
22:26 _maserati joined #gluster
22:31 _dist joined #gluster
22:47 Pupeno joined #gluster
22:51 cc1 left #gluster
23:00 aaronott joined #gluster
23:00 ajames-41678 joined #gluster
23:00 haomaiwa_ joined #gluster
23:03 calisto joined #gluster
23:04 calavera joined #gluster
23:07 gildub joined #gluster
23:23 plarsen joined #gluster
23:26 mjrosenb so, gluster 3.6 and 3.7 should work out of the box on freebsd?
23:28 Rapture joined #gluster
23:38 xaeth_afk joined #gluster
23:54 JoeJulian I know there are bsd regression tests, but I don't know which flavor.
23:54 necrogami joined #gluster
23:57 cyberswat joined #gluster
23:59 cyberswat joined #gluster
