
IRC log for #gluster, 2015-09-30


All times shown according to UTC.

Time Nick Message
00:08 skylar1 joined #gluster
00:08 julim joined #gluster
00:37 squeakyneb joined #gluster
00:52 skylar1 joined #gluster
00:58 EinstCrazy joined #gluster
01:18 luis_silva joined #gluster
01:33 Lee1092 joined #gluster
01:35 RobertLaptop joined #gluster
01:37 calavera joined #gluster
01:49 rafi joined #gluster
01:59 nangthang joined #gluster
02:01 ir8 joined #gluster
02:03 overclk joined #gluster
02:15 haomaiwa_ joined #gluster
02:16 victori joined #gluster
02:22 haomaiwang joined #gluster
02:28 rafi joined #gluster
02:38 johndescs joined #gluster
02:42 haomaiwa_ joined #gluster
02:55 rafi joined #gluster
02:56 victori joined #gluster
02:57 calavera joined #gluster
02:58 victori joined #gluster
03:01 haomaiwa_ joined #gluster
03:10 rafi joined #gluster
03:22 nishanth joined #gluster
03:34 victori joined #gluster
03:39 nishanth joined #gluster
03:41 ramteid joined #gluster
03:44 TheSeven joined #gluster
03:46 nbalacha joined #gluster
03:47 harish_ joined #gluster
03:54 itisravi joined #gluster
03:57 kanagaraj joined #gluster
04:00 RameshN_ joined #gluster
04:01 haomaiwa_ joined #gluster
04:04 vimal joined #gluster
04:05 calavera joined #gluster
04:05 hchiramm_home joined #gluster
04:08 shubhendu joined #gluster
04:17 raghug joined #gluster
04:19 neha_ joined #gluster
04:20 gem joined #gluster
04:22 maveric_amitc_ joined #gluster
04:27 gildub joined #gluster
04:29 nbalacha joined #gluster
04:31 sakshi joined #gluster
04:35 RameshN_ joined #gluster
04:41 yazhini joined #gluster
04:56 kdhananjay joined #gluster
05:01 haomaiwa_ joined #gluster
05:02 ndarshan joined #gluster
05:03 victori joined #gluster
05:05 skoduri joined #gluster
05:11 pppp joined #gluster
05:16 kshlm joined #gluster
05:20 hgowtham joined #gluster
05:25 Manikandan joined #gluster
05:25 luis_silva joined #gluster
05:26 jiffin joined #gluster
05:28 beeradb joined #gluster
05:30 DV joined #gluster
05:32 ashiq joined #gluster
05:34 kanagaraj joined #gluster
05:38 Bhaskarakiran joined #gluster
05:41 kotreshhr joined #gluster
05:41 hchiramm joined #gluster
05:44 PaulCuzner joined #gluster
05:46 aravindavk joined #gluster
05:48 ndarshan joined #gluster
05:48 nishanth joined #gluster
05:49 rjoseph joined #gluster
05:50 ppai joined #gluster
05:51 deepakcs joined #gluster
05:59 rafi joined #gluster
06:01 haomaiwa_ joined #gluster
06:01 karnan joined #gluster
06:06 atalur joined #gluster
06:12 raghu joined #gluster
06:13 dusmant joined #gluster
06:14 jtux joined #gluster
06:17 akay hey guys, is it OK to change the cluster.stripe-block-size on a running volume?
06:20 atalur joined #gluster
06:28 shubhendu_ joined #gluster
06:30 skoduri joined #gluster
06:33 ramky joined #gluster
06:35 rafi hchiramm 10.70.43.100
06:37 shubhendu joined #gluster
06:38 Manikandan joined #gluster
06:40 mhulsman joined #gluster
06:40 ashiq joined #gluster
06:41 Saravana_ joined #gluster
06:42 rraja joined #gluster
06:46 LebedevRI joined #gluster
06:48 GB21 joined #gluster
06:53 mhulsman joined #gluster
06:54 nishanth joined #gluster
06:56 ndarshan joined #gluster
07:01 haomaiwa_ joined #gluster
07:01 vmallika joined #gluster
07:03 Saravana_ joined #gluster
07:07 Manikandan joined #gluster
07:09 Debloper joined #gluster
07:13 DV joined #gluster
07:20 [Enrico] joined #gluster
07:24 ppai joined #gluster
07:38 arcolife joined #gluster
07:41 vmallika joined #gluster
07:55 anil joined #gluster
07:55 anil joined #gluster
08:01 haomaiwa_ joined #gluster
08:23 skoduri joined #gluster
08:24 ndarshan joined #gluster
08:27 So4ring_ joined #gluster
08:31 Pupeno joined #gluster
08:31 neha__ joined #gluster
08:32 poornimag joined #gluster
08:42 Slashman joined #gluster
08:43 [Enrico] joined #gluster
08:49 itisravi joined #gluster
09:00 ctria joined #gluster
09:01 haomaiwa_ joined #gluster
09:04 ashiq joined #gluster
09:08 kovshenin joined #gluster
09:08 shubhendu joined #gluster
09:11 ndarshan joined #gluster
09:11 ppai joined #gluster
09:12 ndarshan joined #gluster
09:15 Saravana_ joined #gluster
09:17 yazhini joined #gluster
09:24 arcolife joined #gluster
09:26 poornimag joined #gluster
09:26 rjoseph joined #gluster
09:28 EinstCrazy joined #gluster
09:33 mhulsman joined #gluster
09:39 TvL2386 joined #gluster
09:43 poornimag joined #gluster
09:44 dusmant joined #gluster
09:46 muneerse joined #gluster
09:50 muneerse2 joined #gluster
09:59 sakshi joined #gluster
10:00 LebedevRI joined #gluster
10:01 Manikandan joined #gluster
10:01 arcolife joined #gluster
10:01 haomaiwa_ joined #gluster
10:07 EinstCrazy joined #gluster
10:11 vmallika joined #gluster
10:11 ashiq joined #gluster
10:13 ppai joined #gluster
10:16 Pupeno joined #gluster
10:16 mhulsman joined #gluster
10:16 haomaiwa_ joined #gluster
10:18 Manikandan joined #gluster
10:20 gem joined #gluster
10:21 bluenemo joined #gluster
10:21 spalai joined #gluster
10:22 aravindavk joined #gluster
10:33 mhulsman joined #gluster
10:35 mhulsman joined #gluster
10:38 Saravana_ joined #gluster
10:44 rafi joined #gluster
10:54 nbalacha joined #gluster
11:02 overclk joined #gluster
11:03 rafi1 joined #gluster
11:03 circle joined #gluster
11:08 Pupeno joined #gluster
11:12 luis_silva joined #gluster
11:17 shubhendu joined #gluster
11:18 EinstCra_ joined #gluster
11:20 nbalacha joined #gluster
11:22 shubhendu joined #gluster
11:33 skoduri joined #gluster
11:39 julim joined #gluster
11:40 GB21_ joined #gluster
11:40 rwheeler joined #gluster
11:45 TvL2386 joined #gluster
11:46 jayT7 joined #gluster
11:46 shubhendu_ joined #gluster
11:49 Saravana_ joined #gluster
11:51 kshlm joined #gluster
12:00 rafi joined #gluster
12:00 kshlm The weekly community meeting is starting now in #gluster-meeting
12:00 raghu joined #gluster
12:02 vmallika joined #gluster
12:03 kotreshhr left #gluster
12:03 julim joined #gluster
12:08 dusmant joined #gluster
12:09 ndarshan joined #gluster
12:10 skylar1 joined #gluster
12:16 spalai left #gluster
12:16 firemanxbr joined #gluster
12:17 anil joined #gluster
12:20 jrm16020 joined #gluster
12:20 ppai joined #gluster
12:29 [Enrico] joined #gluster
12:34 shubhendu__ joined #gluster
12:34 julim joined #gluster
12:37 kanagaraj joined #gluster
12:40 unclemarc joined #gluster
12:47 haomaiwa_ joined #gluster
12:50 chirino joined #gluster
12:59 ppai joined #gluster
13:00 atinm joined #gluster
13:01 haomaiwa_ joined #gluster
13:02 haomaiwa_ joined #gluster
13:04 Saravana_ joined #gluster
13:07 Saravana_ joined #gluster
13:07 overclk joined #gluster
13:08 mpietersen joined #gluster
13:08 Saravana_ joined #gluster
13:10 plarsen joined #gluster
13:11 GB21 joined #gluster
13:15 julim joined #gluster
13:27 ramky joined #gluster
13:27 julim joined #gluster
13:28 shaunm joined #gluster
13:29 klaxa|work joined #gluster
13:33 shyam joined #gluster
13:36 dgandhi joined #gluster
13:37 spcmastertim joined #gluster
13:43 harold joined #gluster
13:44 sakshi joined #gluster
13:45 skylar joined #gluster
13:47 ramky joined #gluster
13:51 julim joined #gluster
13:52 bennyturns joined #gluster
13:54 harish_ joined #gluster
14:01 haomaiwa_ joined #gluster
14:07 _maserati joined #gluster
14:07 julim joined #gluster
14:14 haomaiwang joined #gluster
14:23 TvL2386 joined #gluster
14:31 nbalacha joined #gluster
14:37 coredump joined #gluster
14:43 haomaiwa_ joined #gluster
14:47 ppai joined #gluster
14:47 TheCthulhu joined #gluster
14:49 akay hey guys, does the cluster.stripe-block-size setting affect all volume types?
14:51 DaKnObCS joined #gluster
14:53 DaKnObCS Hey guys.. Quick question.. I need to setup a brand new Gluster Cluster and I need TLS Encryption *AND* Authentication between the nodes. The nodes will be available on the public internet. I want to ensure that only the nodes themselves can contact the others so nobody can simply connect and replicate all the data. For this I found https://kshlm.in/network-encryption-in-glusterfs/
14:53 glusterbot Title: Setting up network encryption in GlusterFS (at kshlm.in)
14:53 DaKnObCS This also has the client-to-server encryption which is another nice feature
14:54 DaKnObCS My question is.. Can I use the same certificate for all nodes or at least have a CA and not have to explicitly specify each common name in every server for every volume?
14:54 DaKnObCS The main concern is that we might start with two nodes but then later on require two more. Then four more, etc.
14:54 DaKnObCS This would require going through every node and adding manually every other node and then setting up encryption and everything
14:55 DaKnObCS Is there a setting where the nodes accept anything signed by the CA I am adding?
14:55 DaKnObCS I was thinking of using the same Common Name and then signing all certificates with the CA but this is not the best practice to follow.
14:57 calavera joined #gluster
15:01 haomaiwa_ joined #gluster
15:02 sakshi joined #gluster
15:05 DaKnObCS Any comments?
15:07 arielb joined #gluster
15:10 haomaiwang joined #gluster
15:13 nishanth joined #gluster
15:18 papamoose joined #gluster
15:18 akay are there any performance settings for getting a gluster volume to perform better with different RAID stripe sizes?
15:20 haomaiwa_ joined #gluster
15:24 DaKnObCS_ joined #gluster
15:24 Pupeno joined #gluster
15:31 julim joined #gluster
15:35 victori joined #gluster
15:36 gem joined #gluster
15:36 kenansulayman joined #gluster
15:37 cholcombe joined #gluster
15:37 julim joined #gluster
15:38 haomaiwa_ joined #gluster
15:38 plarsen joined #gluster
15:51 moviuro_ joined #gluster
15:51 moviuro_ hi all! quick question: is there a corporate sponsor for GlusterFS? (like RH for Ceph)
15:51 nishanth joined #gluster
15:52 clutchk joined #gluster
15:53 m0zes joined #gluster
15:53 haomaiwa_ joined #gluster
15:53 clutchk Hey all, just a quick question: is there a way to start up just one brick, or do you have to bounce all of glusterd?
15:54 msvbhat clutchk: 'gluster volume start <volname> force' tries to start all processes which might have been down
15:55 msvbhat clutchk: You don't need to bounce glusterd
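For example, a minimal sketch of that sequence, with a hypothetical volume name gv0:

    gluster volume status gv0          # identify which brick process is down
    gluster volume start gv0 force     # starts any brick processes that are down; running bricks are untouched
    gluster volume status gv0          # confirm the brick is back online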
15:55 msvbhat moviuro_: RH sponsors gluster
15:56 msvbhat moviuro_: Red Hat provides corporate support for GlusterFS
15:56 csaba joined #gluster
15:56 moviuro_ msvbhat: oh. Don't Ceph and GlusterFS actually achieve the same thing? (distributed storage "software-defined storage"?)
15:56 msvbhat But there might be other companies doing that as well. I do not know the names
15:57 msvbhat moviuro_: But they both have strong points in different areas of software-defined storage
15:58 msvbhat GlusterFS is strong in general file storage and Ceph is good at object storage
15:58 moviuro_ https://www.redhat.com/fr/technologies/storage/storage-server & https://access.redhat.com/products/red-hat-ceph-storage
15:58 glusterbot Title: Stockage sur site, dans un cloud hybride ou public | Red Hat (at www.redhat.com)
15:58 moviuro_ hm, well, good to know that there is a big company backing those projects ;)
15:59 moviuro_ thank you for the pointers, msvbhat
15:59 msvbhat moviuro_: http://searchstorage.techtarget.com/news/4500249075/Red-Hat-Gluster-Ceph-storage-roadmaps-laid-out-at-Red-Hat-Summit
15:59 glusterbot Title: Red Hat Gluster, Ceph storage roadmaps laid out at Red Hat Summit (at searchstorage.techtarget.com)
16:00 msvbhat moviuro_: ^^ That should be of help
16:00 moviuro_ I'm not a member, can't see the contents :/
16:01 haomaiwang joined #gluster
16:01 moviuro_ ah wait, had to scroll a looong way down
16:03 DaKnObCS_ msvbhat: Any ideas about the TLS Config I mentioned?
16:04 * msvbhat searching for DaKnObCS_'s question
16:04 * DaKnObCS_ thanks msvbhat
16:06 msvbhat DaKnObCS_: You can use a CA instead of self-signed certificates
16:06 DaKnObCS_ My main problem is that I have to include all legal common names in each node
16:07 haomaiwa_ joined #gluster
16:07 msvbhat DaKnObCS_: I am not sure I understand your problem correctly
16:07 DaKnObCS_ Well, I want to start a 2-node cluster that will run on *public* IPs
16:08 DaKnObCS_ I need to have some authentication so only nodes I approve can connect and read/write to the volumes
16:08 DaKnObCS_ So a random guy cannot peer with my glusterd
16:08 DaKnObCS_ And also they cannot mount it
16:08 DaKnObCS_ For this I figured out I could use TLS Authentication & Encryption
16:09 DaKnObCS_ The problem is that just like auth.allow where I have to specify all the IP addresses of all nodes in all nodes
16:09 DaKnObCS_ With this configuration I have to specify all certificate CN fields of all nodes in all nodes
16:09 luis_silva joined #gluster
16:09 DaKnObCS_ My question is whether I can add a CA and then use the same CN for *all* nodes
16:10 bala joined #gluster
16:10 bala left #gluster
16:11 haomaiwang joined #gluster
16:11 msvbhat DaKnObCS_: AFAIk yes, you can
16:12 DaKnObCS_ So this way I only have to add one CN to the allow list
16:12 DaKnObCS_ Should I use allow-ssl for the common name and reject for *?
16:12 msvbhat DaKnObCS_: I am not an expert in the $subject. But I think yes.
16:12 DaKnObCS_ Me neither.. This is why I figured I could ask somebody else
16:12 msvbhat DaKnObCS_: But kshlm would throw more light on it
16:13 DaKnObCS_ I know this is not the best practice, but since GlusterFS lacks a better authentication system I think it's a viable alternative
16:13 moviuro_ left #gluster
16:13 DaKnObCS_ I wonder what'll happen if I add the Certificate Authority's CN there.. Will it follow the entire chain all the way to the node?
16:14 msvbhat DaKnObCS_: Ah, I have no idea. Sorry
16:14 msvbhat kshlm, would know
16:15 msvbhat DaKnObCS_: You can send the question to @gluster-users ML and someone would answer.
16:15 msvbhat If you don't find it here, that is
16:15 DaKnObCS_ I was looking for a quick answer.. I might just run a test setup and find it out and then let the list know for future reference.. :-)
16:16 DaKnObCS_ I'll leave that open here in case somebody already knows the answer
16:18 msvbhat DaKnObCS_: Sure,
16:18 DaKnObCS_ Thanks again!
16:18 msvbhat DaKnObCS_: Welcome, happy to help whatever little I can :)
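For reference, a minimal sketch of the CA-based setup discussed above, following the blog post linked earlier; the volume name gv0 and the shared common name glusternode are assumptions:

    # on every server and client, using the standard file locations:
    #   /etc/ssl/glusterfs.pem  - node certificate signed by your CA (CN=glusternode)
    #   /etc/ssl/glusterfs.key  - node private key
    #   /etc/ssl/glusterfs.ca   - the CA certificate (same file everywhere)
    touch /var/lib/glusterd/secure-access          # enable TLS on the management (glusterd) path
    gluster volume set gv0 client.ssl on
    gluster volume set gv0 server.ssl on
    gluster volume set gv0 auth.ssl-allow 'glusternode'   # one shared CN instead of listing every node

Whether a CA's CN or a full certificate chain is honoured beyond the plain CN match is exactly the open question above, so treat this as a starting point for the test setup rather than a confirmed answer.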
16:19 haomaiwa_ joined #gluster
16:20 haomaiwang joined #gluster
16:21 m0zes joined #gluster
16:24 Pupeno joined #gluster
16:34 beeradb_ joined #gluster
16:34 beeradb_ can anyone tell me what the equivalent package is for glusterfs-geo-replication in ubuntu?
16:45 m0zes joined #gluster
16:52 ilbot3 joined #gluster
16:52 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
16:58 neofob joined #gluster
17:00 skoduri joined #gluster
17:01 shyam joined #gluster
17:03 ramky joined #gluster
17:04 CyrilPeponnet Hi guys, when I add a new brick to a replicated volume, is there a way to know when the data has finished replicating across to it?
17:05 CyrilPeponnet I mean a way to know if the new brick is fully populated
17:12 anil joined #gluster
17:13 unicky joined #gluster
17:16 Rapture joined #gluster
17:19 atinm joined #gluster
17:21 rwheeler joined #gluster
17:26 TheCthulhu left #gluster
17:30 dlambrig joined #gluster
17:39 mhulsman joined #gluster
17:42 ekuric1 joined #gluster
17:47 luis_silva joined #gluster
18:07 So4ring_ joined #gluster
18:16 AGTT joined #gluster
18:18 JulienLanglois joined #gluster
18:19 JulienLanglois Hi, I have an issue with Gluster >=3.6 on Ubuntu trusty
18:20 JulienLanglois I don't have the issue on version <3.6
18:20 JulienLanglois The context is 2 nodes having a replicated volume (1 brick by node)
18:20 JulienLanglois both nodes are both client and server for the GlusterFS point of view
18:21 JulienLanglois when the installation is done and ok, I shutdown the 2 nodes
18:21 JulienLanglois then I start only the first one
18:21 JulienLanglois once started, I logged on my system and try to mount my volume => it fails
18:22 JulienLanglois Logs says that none of the nodes are available
18:22 firemanxbr joined #gluster
18:22 JulienLanglois Investigations shows that it fails because the glusterfsd is not started
18:23 JulienLanglois when I turn on the second node, the glusterfsd starts on my first node and I am able to start my volume
18:23 JulienLanglois Is that a bug or a normal behavior ?
18:24 JulienLanglois Same scenario run without issue on Gluster <3.6
18:24 AGTT Hi everyone! I'm trying to solve a failed healing of a replicated vol with 2 bricks (Gluster version 3.7.2-2 (Arch linux OS x86_64)). I'm getting a list of gfids from the 'heal info' command for (only) 1 brick, and it seems that removing the destination file/folder removes the gfid entry from the command.
18:25 Pupeno joined #gluster
18:27 AGTT First of all, the list of gfids from 'gluster volume heal <vol> info' do represent failed heals, right? It just says Brick: <brick> and number of entries.
18:29 atinm joined #gluster
18:30 AGTT @JulienLanglois: I remember having similar issues with my setup some time ago; I don't remember the version or how I resolved it, though. A replicated volume with one brick per server: when only one server was powered on after both had been off, the volume seemed to not be completely available.
18:32 JulienLanglois Hi AGTT, thank you for the feedback, that confirms my thought.
18:32 JulienLanglois I did made test on versions: 3.4, 3.5, 3.6, 3.7 using Ubuntu PPA
18:32 JulienLanglois 3.4 & 3.5 works
18:32 JulienLanglois 3.6 & 3.7 does not work
18:34 AGTT Does this work: mounting the volume manually after logging in with 1 server powered on?
18:34 AGTT I'm assuming you are trying mounting at boot...
18:35 AGTT Is that true?
18:35 JulienLanglois I tried both: mounting at boot and mounting manually after boot
18:35 JulienLanglois same result
18:36 JulienLanglois The problem seems to come from the server component: it does not start the volume until the second node turns on
18:37 AGTT Would it be possible to try with a fresh volume with 3.6 or 3.7?
18:38 JulienLanglois AGTT, what do you mean by a fresh volume ?
18:38 AGTT Create a temporary testing volume.
18:39 JulienLanglois oh sure, that is what I did
18:39 JulienLanglois I started fresh for each test
18:40 AGTT ... for each version?
18:40 AGTT ok...
18:40 AGTT I'm thinking network issues...
18:42 JulienLanglois 3.4.2  (official)  upstart              OK
18:42 JulienLanglois 3.4.7  (PPA)       upstart              OK
18:42 JulienLanglois 3.5.6  (PPA)       upstart              OK
18:42 JulienLanglois 3.6.6  (PPA)       upstart              NOK
18:42 JulienLanglois 3.7.4  (PPA)       upstart, initscript  NOK
18:43 julim joined #gluster
18:43 AGTT I noticed that the network needs to be online before the volume, or even the gluster services, will come online, if I remember correctly...
18:43 TheCthulhu2 joined #gluster
18:43 AGTT ... but you said that previous version worked.
18:43 TheCthulhu2 left #gluster
18:43 AGTT versions*
18:45 dlambrig joined #gluster
18:50 JulienLanglois yes, and having the same thought, I tried restarting the service manually after reboot
18:50 JulienLanglois same result
18:51 JulienLanglois Should I report a bug or send an email ?
18:52 anil joined #gluster
18:56 JoeJulian JulienLanglois: file a bug report
18:56 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:58 JulienLanglois thank you
19:05 primehax_ joined #gluster
19:14 dlambrig joined #gluster
19:16 _maserati If a file is deleted in gluster.... is it really deleted or is there a way to recover?
19:17 JoeJulian _maserati: unless you set features.trash on, it's deleted.
19:18 JoeJulian http://www.gluster.org/community/documentation/index.php/Features/Trash
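A minimal sketch of enabling it, assuming GlusterFS 3.7 or later and a hypothetical volume name gv0:

    gluster volume set gv0 features.trash on
    # deleted and truncated files are then retained under the .trashcan directory at the volume root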
19:22 _maserati does gluster make sure its gone or is it worth disconnecting a node and trying some file recovery steps
19:24 _maserati and also i know how to set commands, but how do i view commands that are set?
19:25 JoeJulian gluster just passes the unlink fop to the filesystem(s), so the filesystem handles deleting it.
19:26 JoeJulian gluster volume get $vol
19:26 _maserati JoeJulian: unrecognized word: get (position 1)
19:26 JoeJulian 3.7?
19:26 _maserati nooo
19:27 _maserati 3.4 i think =\
19:27 _maserati 3.6.1
19:27 _maserati sorry
19:33 JoeJulian Then "gluster volume info $vol" and anything that's still set to default won't show.
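For example, with a hypothetical volume name gv0 (the 'get' form needs 3.7 or later):

    gluster volume get gv0 all     # 3.7+: lists every option, including ones still at their default
    gluster volume info gv0        # older releases: shows only options that have been explicitly set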
19:33 dlambrig joined #gluster
19:34 _maserati awesome! it's not set! /shoots self in head
19:34 JoeJulian Oh, and trash won't be there.
19:34 JoeJulian it's in 3.7
19:34 _maserati Is the possibility of pulling the file off the brick itself worth pursuing? You know: disconnect a node, and doctor that brick
19:35 JoeJulian I don't know the value of your file.
19:35 _maserati high
19:35 JoeJulian Theoretically you could pull a brick and maybe find it.
19:36 _maserati there any useful tools for undeleting files in linux? ive never done this in linux
19:36 JoeJulian Assuming you can find the deleted inode.
19:36 JoeJulian Not really.
19:36 _maserati shiiiit
19:36 _maserati wasnt my fault thank god
19:36 _maserati but this isnt good for business
19:36 JoeJulian I've found stuff by taking a dd image and searching through it for stuff I know is there.
19:37 _maserati but again ive been hounding them for months to do real backups
19:37 JoeJulian Yeah, that's a mantra you have to chant when you're in business for yourself.
19:37 JoeJulian Been there myself.
19:39 _maserati omg they just sent me the list of files they deleted... 800
19:39 _maserati and they are all very encrypted
19:39 JoeJulian NFW.
19:40 _maserati son of a bitch
19:40 _maserati oh well, thanks for your input JoeJulian
19:40 _maserati as always
19:41 JoeJulian If they weren't encrypted you might have had a chance.
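A rough sketch of the approach JoeJulian describes, with a hypothetical brick device and search string; it only has a chance of working on unencrypted data, and the brick should be offline (or the source read-only) while imaging:

    dd if=/dev/sdb1 of=/scratch/brick.img bs=4M                              # raw image of the brick's block device
    grep -a -b 'some string you know was in the file' /scratch/brick.img    # -a: treat binary as text, -b: print byte offsets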
19:44 unicky joined #gluster
19:47 harold joined #gluster
19:54 luis_silva How hard is it to go from 3.3 to 3.5?
19:54 luis_silva or is 3.7 recommended these days?
19:56 luis_silva er sorry i meant 3.6
19:57 Trefex joined #gluster
19:57 JoeJulian I'm leaning more and more toward 3.7
19:57 JoeJulian Which distro?
20:05 luis_silva CentOS
20:05 luis_silva el6
20:06 JoeJulian It's been long enough I don't recall if you can go from 3.3 to newer without downtime, but everything else about it is as easy as installing the new rpms.
20:06 luis_silva I've got this version available in our yum mirrors: 3.5.3-1.el6
20:06 JoeJulian @latest
20:06 glusterbot JoeJulian: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
20:07 luis_silva ok cool.
20:08 luis_silva so in general: stop gluster on all systems, yum update, and then start it back up again.
20:10 JoeJulian yes
20:11 _maserati it sounds so easy that it scares me
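A rough sketch of that sequence for el6, assuming the .repo file from download.gluster.org has already been added to yum; service names and the full-downtime approach (safest from a 3.3 starting point) are assumptions:

    # on each server, with clients unmounted (stop them all first if taking the volume fully offline)
    service glusterd stop
    pkill glusterfsd; pkill glusterfs       # make sure brick and self-heal processes are down too
    yum update 'glusterfs*'
    service glusterd start
    gluster volume status                   # confirm the bricks come back online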
20:20 hchiramm_home joined #gluster
20:28 gem joined #gluster
20:34 CyrilPeponnet Hi @JoeJulian :)
20:35 CyrilPeponnet any idea how to check if a brick in a replicated volume is well "replicated" ?
20:41 skylar joined #gluster
20:41 F2Knight joined #gluster
20:43 F2Knight Hi guys, playing with Gluster. so far much easier then DRBD to deploy on an existing system..
20:43 F2Knight Q: do you 'have' to mount the brick to write data to it.?
20:44 F2Knight on my setup I have /glusterfs/gv0 as my brick
20:44 JoeJulian CyrilPeponnet: gluster volume heal $vol info
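A minimal sketch of that check with a hypothetical volume name gv0; once every brick reports "Number of entries: 0", the new brick has caught up:

    gluster volume heal gv0 info
    #   Brick server1:/bricks/gv0
    #   Number of entries: 0
    #   Brick server2:/bricks/gv0
    #   Number of entries: 0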
20:44 F2Knight it is mounted to /mnt/glusterfs/gv0 and if I add data there I can see it on the remote side.
20:45 F2Knight What I was hoping was that /glusterfs/gv0 would be able to have data written to it that would be readable on the remote side...
20:45 JoeJulian F2Knight: yes, writing to the brick is forbidden.
20:46 F2Knight my reason is the remote server I intend to symlink folders to /glusterfs/gv0
20:46 F2Knight JoeJulian thank you.. okay so if writing to the brick is forbidden, and we have to mount it to write to it... can multiple servers mount their local copy?
20:46 JoeJulian They can mount the volume locally, yes.
20:47 F2Knight and if a slave 'writes' to that copy, what happens?
20:47 JoeJulian Don't mistake "copy" for "replicated volume"
20:47 F2Knight umm.. I am on a replica volume
20:47 JoeJulian The writes are synchronous.
20:47 F2Knight so its 2way sync
20:48 JoeJulian The client writes the data to all replicas synchronously.
20:49 JoeJulian So if you have a replica 2 volume, it will write to 2 bricks simultaneously.
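For illustration, the kind of volume being described, with hypothetical hostnames and brick paths:

    gluster volume create gv0 replica 2 pbx1:/bricks/gv0 pbx2:/bricks/gv0
    gluster volume start gv0
    # a client mounting gv0 sends every write to both bricks synchronously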
20:49 F2Knight I suppose then for my expectations... (having the slaves be able to startup services on a failover) that I should perhaps mount the directory locally as RO first and have keepalived switch them to rw ?
20:49 F2Knight My biggest concern is how to handle replicating MySQL data.
20:49 JoeJulian gluster has quorum capabilities.
20:50 JoeJulian What's the latency like between servers?
20:50 F2Knight less than 2ms
20:50 F2Knight this is side by side PBX installs.
20:50 JoeJulian Should be fine then.
20:50 F2Knight on a dedicated nic
20:51 JoeJulian Is there a 3rd box that could be used as an arbitrator for quorum? That would be best.
20:51 F2Knight intent is to run keepalived to handle IP failover, and keep each system 'synced' at all times.
20:51 JoeJulian No need for failover.
20:51 F2Knight I don't follow?
20:52 JoeJulian Actually...
20:52 JoeJulian This is only for mysql?
20:52 F2Knight the intent here is to have a failover PBX system.
20:52 JoeJulian Ah, cool.
20:52 skylar joined #gluster
20:52 F2Knight so a bunch of static files (asterisk directories no big deal) and the MySQL /var/lib/mysql
20:53 F2Knight Intent is that only the master would be writing data; on failover the secondary node would get rw access to the database and configs
20:53 cliluw Is it more conventional to mount Gluster bricks under /data/ or under /export/?
20:54 JoeJulian cliluw: According to FHS it seems it should be either /data or /srv
20:54 JulienLanglois joined #gluster
20:54 F2Knight So I suppose the right way to do this then would be to mount each local brick, as readonly and use mount --bind to remount the directories to the proper spaces.
20:55 So4ring_ joined #gluster
20:55 F2Knight JoeJulian is there good documentation on how to make a node client/server using the .vol files? My testing with them doesn't  let me access anything. but I am totally able to if I mount it manually.
20:56 cliluw JoeJulian: The Wikipedia page about the FHS doesn't say anything about /data.
21:06 gildub joined #gluster
21:17 JoeJulian F2Knight: Not really. Writing .vol files by hand isn't supported.
21:18 F2Knight JoeJulian, umm okay . is there a copy of them some place? or is the right way to mount the services just to modify fstab to mount them?
21:19 JoeJulian cliluw: Sure enough. Good. I guess that makes that less ambiguous. /srv it is.
21:20 JoeJulian F2Knight: "mount -t glusterfs $server:$volume $mountpoint" or the fstab equivalent.
21:21 F2Knight yea okay thats what I have been doing.
21:21 JoeJulian If you're worried about mounting if $server is down:
21:21 JoeJulian @mount server
21:21 glusterbot JoeJulian: (#1) The server specified is only used to retrieve the client volume definition. Once connected, the client connects to all the servers in the volume. See also @rrdns, or (#2) One caveat is that the clients never learn of any other management peers. If the client cannot communicate with the mount server, that client will not learn of any volume changes.
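A minimal fstab sketch reflecting that caveat, with hypothetical server and volume names; the backup server is only consulted for the initial volfile fetch, and the option is spelled backupvolfile-server on older releases:

    server1:/gv0  /mnt/gv0  glusterfs  defaults,_netdev,backup-volfile-servers=server2  0 0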
21:41 plarsen joined #gluster
21:42 jobewan joined #gluster
22:11 coredump joined #gluster
22:16 CyrilPeponnet @purpleidea around?
22:17 CyrilPeponnet ->  /usr/sbin/gluster volume start my_vol returned 1 instead of one of [0] when volume is already started. I don't understand why it tries to start a started volume
22:27 frostyfrog joined #gluster
22:27 frostyfrog joined #gluster
22:29 CyrilPeponnet ok, looks like just after a glusterd restart, issuing 'gluster vol status' reports the volumes as stopped
22:29 CyrilPeponnet so it tried to start them again
22:33 Pupeno joined #gluster
22:37 shyam joined #gluster
23:11 calavera joined #gluster
23:17 plarsen joined #gluster
