
IRC log for #gluster, 2015-11-04


All times shown according to UTC.

Time Nick Message
00:02 zhangjn joined #gluster
00:23 cyberbootje joined #gluster
00:27 bennyturns joined #gluster
00:41 _Bryan_ joined #gluster
00:48 mlhamburg_ joined #gluster
00:56 dblack joined #gluster
00:58 zhangjn joined #gluster
01:08 gildub joined #gluster
01:17 plarsen joined #gluster
01:20 calavera joined #gluster
01:28 Lee1092 joined #gluster
01:37 shyam joined #gluster
01:42 mjrosenb does glusterfs-3.7 automatically move files between nodes when a rename changes what node the file should be on
01:42 mjrosenb s/node/brick/
01:42 glusterbot What mjrosenb meant to say was: does glusterfs-3.7 automatically move files between bricks when a rename changes what node the file should be on
01:43 mjrosenb s/node/brick/g
01:43 glusterbot mjrosenb: Error: u's/node/brick/g does glusterfs-3.7 automatically move files between nodes when a rename changes what node the file should be on' is not a valid regular expression.
01:43 mjrosenb glusterbot: YES
01:43 glusterbot mjrosenb: I do not know about 'YES', but I do know about these similar topics: 'yum'
01:43 mjrosenb that is totally a valid regular expression, and in fact the one I should have used originally.
01:54 harish_ joined #gluster
02:07 pakha joined #gluster
02:09 sejur1004 joined #gluster
02:10 aravindavk joined #gluster
02:15 dgbaley joined #gluster
02:25 ccha joined #gluster
02:33 nangthang joined #gluster
02:42 sejur1004 anybody know how to change volume name ?
02:46 sejur1004 the gluster docs say to use the command "volume rename VOLNAME NEW-VOLNAME" but it doesn't work
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:48 beeradb joined #gluster
02:54 haomaiwa_ joined #gluster
03:01 haomaiwa_ joined #gluster
03:06 bharata-rao joined #gluster
03:06 coreping joined #gluster
03:07 harish joined #gluster
03:09 haomaiwa_ joined #gluster
03:30 overclk_ joined #gluster
03:39 stickyboy joined #gluster
03:40 kdhananjay joined #gluster
03:46 itisravi joined #gluster
03:47 itisravi joined #gluster
03:54 JoeJulian sejur1004: What? What doc?
03:56 hagarth joined #gluster
03:58 shubhendu joined #gluster
04:04 atinm joined #gluster
04:06 dlambrig__ joined #gluster
04:08 leucos_ joined #gluster
04:09 Pintomatic_ joined #gluster
04:09 liewegas joined #gluster
04:10 fyxim_ joined #gluster
04:11 lanning_ joined #gluster
04:11 prg3_ joined #gluster
04:12 neha_ joined #gluster
04:13 JoeJulian sejur1004: Hah! I've had this conversation before: http://irclog.perlgeek.de/gluster/2013-07-29#i_7384793
04:13 glusterbot Title: IRC log for #gluster, 2013-07-29 (at irclog.perlgeek.de)
04:13 JoeJulian But there's still no "volume rename"
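Since there is still no "volume rename" command, the usual supported route is to create a fresh volume under the new name and copy the data across client mounts. A minimal sketch, assuming made-up names (oldvol, newvol, server1/server2, brick paths); not a documented procedure, just standard CLI steps:

    # create and start the new volume (names and brick paths are illustrative)
    gluster volume create newvol replica 2 server1:/bricks/newvol server2:/bricks/newvol
    gluster volume start newvol

    # copy through client mounts, never by touching the bricks directly
    mount -t glusterfs server1:/oldvol /mnt/old
    mount -t glusterfs server1:/newvol /mnt/new
    rsync -a /mnt/old/ /mnt/new/

    # once verified, retire the old name
    gluster volume stop oldvol
    gluster volume delete oldvol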
04:14 rich0dify joined #gluster
04:18 gem joined #gluster
04:24 DV joined #gluster
04:24 kanagaraj joined #gluster
04:24 shubhendu joined #gluster
04:26 ramteid joined #gluster
04:28 atinm joined #gluster
04:30 [7] joined #gluster
04:31 zhangjn joined #gluster
04:31 ashiq joined #gluster
04:32 zhangjn joined #gluster
04:35 calavera joined #gluster
04:35 RameshN joined #gluster
04:39 overclk joined #gluster
04:44 Manikandan joined #gluster
04:46 calavera joined #gluster
04:49 kshlm joined #gluster
04:51 deepakcs joined #gluster
04:52 pppp joined #gluster
04:53 sejur1004 help
04:55 sejur1004 @JoeJulian thank you for your information. i will check this "http://irclog.perlgeek.de/gluster/2013-07-29#i_7384793"
05:00 F2Knight joined #gluster
05:02 vimal joined #gluster
05:04 ppai joined #gluster
05:12 eberx joined #gluster
05:13 jiffin joined #gluster
05:13 shubhendu joined #gluster
05:14 anil joined #gluster
05:16 ndarshan joined #gluster
05:19 kotreshhr joined #gluster
05:19 Bhaskarakiran joined #gluster
05:26 shubhendu joined #gluster
05:26 hchiramm joined #gluster
05:31 hagarth joined #gluster
05:31 haomaiwa_ joined #gluster
05:35 tomatto joined #gluster
05:35 hgowtham joined #gluster
05:37 kdhananjay joined #gluster
05:38 overclk_ joined #gluster
05:39 ramky joined #gluster
05:46 rafi joined #gluster
05:52 nbalacha joined #gluster
05:56 rafi1 joined #gluster
05:56 skoduri joined #gluster
05:57 kovshenin joined #gluster
05:59 vmallika joined #gluster
06:01 haomaiwang joined #gluster
06:01 gem joined #gluster
06:14 hagarth joined #gluster
06:24 poornimag joined #gluster
06:33 JPaul joined #gluster
06:40 nishanth joined #gluster
06:43 coredump|br joined #gluster
06:46 bhuddah joined #gluster
06:50 bharata-rao joined #gluster
06:55 GB21_ joined #gluster
06:55 GB21 joined #gluster
07:01 haomaiwang joined #gluster
07:03 jwd joined #gluster
07:05 nbalacha joined #gluster
07:06 nbalacha joined #gluster
07:11 LebedevRI joined #gluster
07:12 jtux joined #gluster
07:19 gem joined #gluster
07:32 Norky joined #gluster
07:40 rafi joined #gluster
07:54 suliba joined #gluster
08:01 haomaiwang joined #gluster
08:07 [Enrico] joined #gluster
08:14 ivan_rossi is anyone (other than the FB people) using btrfs for the brick's filesystem?
08:14 JoeJulian I use it for other things, but not bricks.
08:15 fsimonce joined #gluster
08:15 ivan_rossi any reason for that? you tested zfs for bricks IIRC
08:17 JoeJulian No reason. I don't need the feature set. zfs killed my 8 core 64gb systems, mostly with cpu load.
08:18 ivan_rossi yeah. i heard zfs is a hog if you use stuff like dedup and compression
08:19 JoeJulian If the devs had written snapshotting to be less lvm specific, btrfs would have been a much more useful choice.
08:19 ivan_rossi i have been wondering about btrfs to try off-line dedup, i guess i will have to test it by myself, if i have the time
08:19 ivan_rossi s/off-line/batch/
08:19 glusterbot What ivan_rossi meant to say was: i have been wondering about btrfs to try batch dedup, i guess i will have to test it by myself, if i have the time
08:20 ivan_rossi thanks bot :-D
08:22 ivan_rossi maybe you can talk devs into moving away from lvm snaps for 4.0 :-D
08:23 JoeJulian Pretty sure that's on the roadmap already.
08:26 ivan_rossi nice!
08:26 nbalacha joined #gluster
08:33 mlhamburg joined #gluster
08:42 rafi1 joined #gluster
08:47 harish joined #gluster
08:50 harish joined #gluster
08:51 mkzero joined #gluster
08:52 aravindavk joined #gluster
08:59 overclk joined #gluster
09:00 shell|mextli joined #gluster
09:00 shell|mextli Hello everyone!
09:01 shell|mextli I've some strange issue configuring geo-replication. I've read many docs and tried the georepsetup tool.
09:01 RedW joined #gluster
09:01 haomaiwang joined #gluster
09:02 shell|mextli I don't know why, but my dedicated server for the replication isn't used. It still uses root.
09:03 hgowtham joined #gluster
09:04 gem_ joined #gluster
09:04 fsimonce joined #gluster
09:05 Pupeno joined #gluster
09:05 shell|mextli I'm using version 3.5.2 Is this a problem?
09:07 dusmant joined #gluster
09:08 shell|mextli Ups, sorry. It's version 3.7.5
09:09 shell|mextli How can i force gluster geo-replication to use a specific user?
09:09 arcolife joined #gluster
09:10 overclk shell|mextli: you'd need to configure geo-rep to use mountbroker. aravindavk is pretty good at this...
09:11 archit_ joined #gluster
09:12 shell|mextli overclk: I know, it's configured to use it. But the log still says the master tries to connect as root
09:13 overclk shell|mextli: it's most probably misconfigured then. I've lost touch in geo-rep tbh..
09:13 kovshenin joined #gluster
09:13 overclk shell|mextli: that's the reason I highlighted aravindavk :)
09:14 shell|mextli aravindavk: Any idea?
09:16 neha_ joined #gluster
09:18 overclk shell|mextli: mind providing geo-rep logs (master) ?
09:18 aravindavk shell|mextli: checking now. logs will help us in figuring out the issue
09:18 kovshenin joined #gluster
09:19 shell|mextli ok, one second
09:19 aravindavk shell|mextli: have you followed the steps as in https://github.com/aravindavk/georepsetup for Non root setup
09:20 glusterbot Title: aravindavk/georepsetup · GitHub (at github.com)
09:22 harish joined #gluster
09:22 shell|mextli aravindavk: yes, i did.
09:22 shell|mextli aravindavk: https://paste.pcspinnt.de/view/149ad318
09:22 glusterbot Title: Untitled - Paster pcpsinnt.de (at paste.pcspinnt.de)
09:23 shell|mextli It says: syncing: gluster://localhost:pimages -> ssh://root@www1.ambiendo.ovh:gluster://localhost:pimages
09:24 shell|mextli I've run georepsetup pimages geouser@www1.ambiendo.ovh pimages to create the georeplication
09:24 Slashman joined #gluster
09:32 [Enrico] joined #gluster
09:37 monotek joined #gluster
09:38 aravindavk shell|mextli: any session with root exists before creating non root session?
09:39 shell|mextli aravindavk: You mean georeplication? no.
09:41 stickyboy joined #gluster
09:41 Philambdo joined #gluster
09:41 shell|mextli aravindavk: ssh session as geouser works as expected.
09:41 shell|mextli so, i don't know why he wants to create a root session.
09:41 atalur joined #gluster
09:45 shell|mextli Hm, looks like georepsetup is broken.
09:45 shell|mextli There is the function ssh_initialize. This is called with slavehost, passwd. The initial connect is done with root
09:46 aravindavk shell|mextli: I think I found a bug in georepsetup tool. Will fix it. Now run following commands
09:47 aravindavk shell|mextli: gluster volume geo-replication pimages www1.ambiendo.ovh pimages stop
09:47 aravindavk shell|mextli: gluster volume geo-replication pimages www1.ambiendo.ovh pimages delete
09:47 aravindavk shell|mextli: gluster volume geo-replication pimages geouser@www1.ambiendo.ovh pimages create no-verify
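For readers following along, the non-root recreation aravindavk walks through above boils down to roughly the sequence below. The volume name pimages, user geouser, and host www1.ambiendo.ovh come from the conversation; the mountbroker options reflect the upstream 3.7-era geo-replication docs and are shown here only as a hedged sketch, written with the SLAVEHOST::SLAVEVOL form the CLI documents:

    # on the slave: let the unprivileged user mount the slave volume via mountbroker
    gluster system:: execute mountbroker opt mountbroker-root /var/mountbroker-root
    gluster system:: execute mountbroker user geouser pimages
    service glusterd restart

    # on the master: drop the broken root session and recreate it as geouser@slave
    gluster volume geo-replication pimages www1.ambiendo.ovh::pimages stop
    gluster volume geo-replication pimages www1.ambiendo.ovh::pimages delete
    gluster volume geo-replication pimages geouser@www1.ambiendo.ovh::pimages create no-verify
    gluster volume geo-replication pimages geouser@www1.ambiendo.ovh::pimages start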
09:47 shell|mextli aravindavk: line 215=
09:47 dusmant joined #gluster
09:47 ctria joined #gluster
09:47 rafi joined #gluster
09:47 aravindavk shell|mextli: I will fix the georepsetup tool issue now. Thanks for using it and reporting issue.
09:48 poornimag joined #gluster
09:49 gildub joined #gluster
09:50 shell|mextli aravindavk: jep, thats it.
09:56 aravindavk shell|mextli: working now?
09:56 shell|mextli aravindavk: yes. If i run it with the gluster command
09:57 aravindavk shell|mextli: sent patch to fix the issue. if you refresh georepsetup repo, you will get it. https://github.com/aravindavk/georepsetup/commit/68a953f365926b8a4c326348f0e29245d566b4af
09:57 glusterbot Title: Fixed Non root Session Creation · aravindavk/georepsetup@68a953f · GitHub (at github.com)
09:57 ivan_rossi @paste
09:57 glusterbot ivan_rossi: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
10:17 mhulsman joined #gluster
10:19 ws2k3 joined #gluster
10:24 DV joined #gluster
10:26 DV joined #gluster
10:29 shell|mextli joined #gluster
10:38 poornimag joined #gluster
10:44 kovshenin joined #gluster
10:47 lkoranda_ joined #gluster
10:48 marbu joined #gluster
10:48 jmarley joined #gluster
10:48 csaba1 joined #gluster
10:52 mbukatov joined #gluster
10:58 lkoranda joined #gluster
10:59 kovshenin joined #gluster
11:06 csaba1 joined #gluster
11:18 Slashman joined #gluster
11:20 GB21 joined #gluster
11:23 drankis joined #gluster
11:24 jwaibel joined #gluster
11:25 jwd_ joined #gluster
11:26 dusmant joined #gluster
11:27 jwd__ joined #gluster
11:27 Bhaskarakiran joined #gluster
11:36 firemanxbr joined #gluster
11:39 dusmant joined #gluster
11:44 nis joined #gluster
11:46 nis does anyone know if there is a meaning to the order bricks are listed in when creating a new distributed volume? I wonder if glusterfs will distribute files on the first and second host equally even though the number of writing processes is low
11:46 nis JoeJulian: are you here ?
11:49 vikki joined #gluster
11:51 jiffin nis: if u are creating a replicated volume, then order matters for the replica pairs
11:51 GB21 joined #gluster
11:52 aravindavk joined #gluster
11:52 jiffin nis: distribution algorithm works with hashing, there is no special importance for the order of bricks
11:56 bfoster joined #gluster
12:02 nis jiffin: so basically I can create a distributed volume and first list the bricks on node1 and after on node2?
12:02 overclk joined #gluster
12:03 jiffin nis: yes u can
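To make the ordering rule concrete, a sketch with hypothetical hosts node1/node2 and brick paths: in a plain distributed volume the list order has no effect on placement, while with replica 2 each consecutive pair of bricks becomes a mirror, so the two bricks of a pair should sit on different hosts.

    # pure distribute: files land on bricks by hash, so order is irrelevant
    gluster volume create distvol \
        node1:/bricks/b1 node1:/bricks/b2 node2:/bricks/b1 node2:/bricks/b2

    # distributed-replicate, replica 2: bricks are paired in the order given,
    # so interleave hosts - (node1,node2)(node1,node2) - to avoid a pair
    # whose two copies live on the same host
    gluster volume create repvol replica 2 \
        node1:/bricks/b1 node2:/bricks/b1 \
        node1:/bricks/b2 node2:/bricks/b2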
12:03 jdarcy joined #gluster
12:04 ppai joined #gluster
12:05 creshal joined #gluster
12:06 pgreg joined #gluster
12:07 creshal Is there a useful documentation on setting up SSL with gluster? The official docs (http://gluster.readthedocs.org/en/latest/Administrator%20Guide/SSL/) don't work with 3.7.5 (ex., ssl.cert-depth option does not exist), and I can't seem to find a working CA configuration (shared CA certificate isn't accepted, concatenation of all server/client certificates isn't accepted either).
12:07 glusterbot Title: SSL - Gluster Docs (at gluster.readthedocs.org)
12:09 rjoseph joined #gluster
12:14 shell|mextli joined #gluster
12:16 itisravi creshal: I think https://kshlm.in/network-encryption-in-glusterfs/ is more recent.
12:16 glusterbot Title: Setting up network encryption in GlusterFS (at kshlm.in)
12:16 itisravi Never tried SSL myself though.
12:17 itisravi kshlm can confirm.
12:18 kshlm creshal, Check out my post on this topic. It's got more information on how to get started with different encryption setups.
12:18 kshlm Let me know if you have any troubles.
12:18 creshal Just reading it, thanks.
12:19 kshlm itisravi, Thanks for linking the blog post.
12:19 itisravi kshlm: np:), Maybe readthedocs can be updated with the same content.
12:26 poornimag joined #gluster
12:30 EinstCrazy joined #gluster
12:31 creshal Hit me, please.
12:31 creshal I regenerated the CA certificate at some point to switch from SHA1 to SHA256, and gave one server a certificate signed by the old CA and one by the new.
12:34 lpabon joined #gluster
12:34 creshal Well, it works now that I regenerated the other certificate, too.
12:34 kshlm joined #gluster
12:36 creshal Although one question remains: When would/would not I want to use management connection encryption? (i.e., set /var/lib/glusterd/secure-access) Any downsides of enabling it? What could an attacker/eavesdropper do if it's disabled?
12:39 kshlm creshal, Management encryption encrypts the connections between GlusterD's in the cluster.
12:39 jdarcy And, perhaps more importantly, authenticates.
12:42 haomaiwa_ joined #gluster
12:47 ninkotech joined #gluster
12:47 ninkotech_ joined #gluster
12:48 tomatto joined #gluster
12:48 nis jiffin: Thanks
13:01 creshal So… I'll generally want that in all cases where I also want encryption for the bricks?
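For anyone hitting the same wall, the setup kshlm's post describes amounts to roughly the following. The certificate paths are the defaults gluster looks for, the volume name "myvol" and the CN list are examples, and option names varied slightly across releases (hence the ssl.cert-depth confusion above), so treat this as a sketch rather than a definitive recipe:

    # each node and client needs /etc/ssl/glusterfs.key (private key),
    # /etc/ssl/glusterfs.pem (signed cert) and /etc/ssl/glusterfs.ca
    # (CA bundle covering every peer's cert)

    # enable TLS on the I/O path for one volume
    gluster volume set myvol client.ssl on
    gluster volume set myvol server.ssl on
    gluster volume set myvol auth.ssl-allow 'client1-cn,client2-cn'

    # optional management-path encryption: create the flag file on every
    # node and client, then restart glusterd
    touch /var/lib/glusterd/secure-access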
13:01 haomaiwa_ joined #gluster
13:04 aravindavk joined #gluster
13:05 klaxa|work joined #gluster
13:05 amye joined #gluster
13:12 ashiq joined #gluster
13:17 rjoseph joined #gluster
13:19 kotreshhr joined #gluster
13:29 aravindavk joined #gluster
13:32 bennyturns joined #gluster
13:33 shaunm joined #gluster
13:38 unclemarc joined #gluster
13:41 B21956 joined #gluster
13:42 B21956 left #gluster
13:48 ppai joined #gluster
13:56 julim joined #gluster
13:57 shyam joined #gluster
14:00 muneerse joined #gluster
14:01 haomaiwang joined #gluster
14:04 harish_ joined #gluster
14:06 overclk joined #gluster
14:07 mhulsman joined #gluster
14:09 baoboa joined #gluster
14:11 shubhendu joined #gluster
14:11 wistof joined #gluster
14:13 kotreshhr left #gluster
14:17 jmarley joined #gluster
14:18 plarsen joined #gluster
14:18 shell|mextli Can anyone help me with this error:  https://paste.pcspinnt.de/view/183d1aa1
14:18 glusterbot Title: Untitled - Paster pcpsinnt.de (at paste.pcspinnt.de)
14:18 shell|mextli My geo-replication gets faulty
14:21 RameshN joined #gluster
14:21 Humble joined #gluster
14:23 glafouille joined #gluster
14:26 hagarth joined #gluster
14:28 pppp joined #gluster
14:31 overclk joined #gluster
14:32 kkeithley joined #gluster
14:33 ira joined #gluster
14:34 pppp joined #gluster
14:35 harish joined #gluster
14:37 skylar joined #gluster
14:41 amye joined #gluster
14:43 shell|mextli No one?
14:44 pppp joined #gluster
14:50 pppp joined #gluster
14:51 creshal I just started using gluster today, so… no idea. And I suspect all the Americans are still asleep.
14:52 klaxa|work >rsync error: error in rsync protocol data stream
14:52 klaxa|work sounds like incompatible rsync versions maybe?
14:55 Philambdo joined #gluster
14:55 shell|mextli klaxa|work: there are two slaves www1 and www2. Both have the same packages installed.
14:56 klaxa|work hmm ok
14:56 pppp joined #gluster
14:57 klaxa|work if you google rsync error code 12 you get some related stackoverflow posts
14:57 klaxa|work maybe one of those can help you?
15:00 creshal I'm more stumped by the "rsync> gsyncd sibling not found" line. That seems gluster specific and is the cause of rsync bailing in the first place. But I've no idea about geo replication apart from glossing over the manual yesterday…
15:01 dgandhi joined #gluster
15:01 7GHABKQND joined #gluster
15:02 dgandhi joined #gluster
15:03 dgandhi joined #gluster
15:03 bennyturns joined #gluster
15:04 jiffin joined #gluster
15:04 jiffin joined #gluster
15:05 dgandhi joined #gluster
15:06 ira joined #gluster
15:06 GB21 joined #gluster
15:07 dgandhi joined #gluster
15:09 dgandhi joined #gluster
15:10 dgandhi joined #gluster
15:10 rwheeler joined #gluster
15:11 dgandhi joined #gluster
15:14 Sadama joined #gluster
15:16 maserati joined #gluster
15:22 dgandhi joined #gluster
15:24 martineg_ joined #gluster
15:28 RameshN joined #gluster
15:31 dmnchild joined #gluster
15:32 zhangjn joined #gluster
15:34 ayma joined #gluster
15:35 shell|mextli creshal: didn't help
15:35 shell|mextli but there is already an error report: https://bugzilla.redhat.com/show_bug.cgi?id=1044872
15:35 glusterbot Bug 1044872: high, medium, ---, rhs-bugs, CLOSED WORKSFORME, Dist-geo-rep : status keep going to faulty with rsync error  "rsync> inflate returned -3"
15:37 shell|mextli good ideas are welcome
15:38 stickyboy joined #gluster
15:40 kovshenin joined #gluster
15:51 overclk joined #gluster
15:52 kotreshhr joined #gluster
15:57 mhulsman joined #gluster
15:59 bowhunter joined #gluster
16:01 haomaiwang joined #gluster
16:04 julim joined #gluster
16:10 jtux left #gluster
16:14 primehaxor joined #gluster
16:15 squizzi_ joined #gluster
16:15 haomaiwang joined #gluster
16:17 dusmant joined #gluster
16:28 shubhendu_ joined #gluster
16:28 overclk joined #gluster
16:34 nishanth joined #gluster
16:39 cholcombe joined #gluster
16:39 squizzi_ joined #gluster
16:41 skoduri joined #gluster
16:42 F2Knight joined #gluster
16:47 RameshN_ joined #gluster
16:50 PhoenixSTF joined #gluster
16:50 PhoenixSTF hello
16:50 glusterbot PhoenixSTF: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
16:51 gem joined #gluster
16:52 PhoenixSTF ahh alright, so the question is simple, what is the inconvenience of making a volume from a folder rather than a mount? (like you have to use force on create)
16:56 vimal joined #gluster
16:58 JoeJulian @later tell shell|mextli Looks like the sibling gsyncd is not running: https://github.com/gluster/glusterfs/blob/master/geo-replication/src/gsyncd.c#L267-L273
16:58 glusterbot JoeJulian: The operation succeeded.
16:58 JoeJulian PhoenixSTF: That is correct.
16:59 drankis joined #gluster
16:59 JoeJulian The reason for that is most of the time you're using large storage to create a cluster. If you don't have that storage mounted, something is probably wrong.
17:00 JoeJulian And using root for anything that can grow to fill it is bad practice.
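To illustrate the point (host names and paths are invented): creating a brick on the root filesystem needs the force keyword, while the usual layout is a dedicated filesystem mounted per brick with the brick in a subdirectory, so a missing mount fails loudly instead of silently filling /.

    # brick directory on the root filesystem - gluster refuses unless forced
    gluster volume create vmvol replica 2 \
        host1:/srv/gluster/vmvol host2:/srv/gluster/vmvol force

    # preferred layout: dedicated filesystem per brick, brick in a subdirectory
    mkfs.xfs /dev/sdb1
    mount /dev/sdb1 /data/brick1
    mkdir -p /data/brick1/vmvol
    gluster volume create vmvol replica 2 \
        host1:/data/brick1/vmvol host2:/data/brick1/vmvol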
17:01 haomaiwa_ joined #gluster
17:02 PhoenixSTF JoeJulian: thanks, but i do realize those kinds of issues, although it is only for qcow2 images, what I wanted to know is if it messes up anything with the rest of the files in the system while syncing that folder.
17:03 JoeJulian Nope, no other problems.
17:03 PhoenixSTF as I do not understand how the sync is made, just worried it might write on an inode that belongs to something else
17:04 PhoenixSTF thanks JoeJulian, btw, loving gluster :)
17:05 JoeJulian No, all file operations (and they are all file operations) will happen within the subdirectory you specify for the brick.
17:05 JoeJulian Me too.
17:06 PhoenixSTF the thing I think is the most fantastic is the self healing and the ability to sync folders that already have files, like magic, it's just freaking awesome
17:07 JoeJulian Don't count on that. All files should be created and updated through a volume mount. Changing the bricks is a recipe for disaster.
17:08 PhoenixSTF I dont change the bricks, just create a volume with data already in the partition with another empty brick
17:08 PhoenixSTF and it simply syncs
17:08 JoeJulian cool
17:09 PhoenixSTF tested with different files on each brick and it also syncs
17:09 JoeJulian Technically that's considered "undefined behavior" but it's worked as long as I've been using it.
17:14 lpabon joined #gluster
17:16 PhoenixSTF well when you have something working on 2 or 3 systems and need more resilience with vm's, gluster on an already working system, although it may not be considered good practice, works and takes away a lot of pain
17:17 JoeJulian I agree. I'm just pointing out the developer's stance on that so you don't end up building all your automation around it.
17:18 RameshN__ joined #gluster
17:18 armyriad joined #gluster
17:20 PhoenixSTF oh no, as soon as I create a volume, I just replace the vm's disks through the gluster mount so changes will be ok
17:21 PhoenixSTF just hope that this behaviour stays in gluster because it's a lot less hassle to use with vm disks than say drbd or maybe ceph
17:24 ctria joined #gluster
17:25 calavera joined #gluster
17:25 creshal Mhm. I've ripped out DRBD to replace it with gluster because of how invasive the former is. DRBD problem? Your everything is f*cked and you can't access the data.
17:26 JoeJulian That's how I found gluster, too.
17:27 jwd joined #gluster
17:27 calavera_ joined #gluster
17:37 Pupeno joined #gluster
17:46 ghenry joined #gluster
17:46 ghenry joined #gluster
17:49 creshal http://www.postgresql.org/docs/9.4/static/creating-cluster.html#CREATING-CLUSTER-NFS Under what category falls fuse.glusterfs here? NFS-ish enough to make trouble?
17:49 glusterbot Title: PostgreSQL: Documentation: 9.4: Creating a Database Cluster (at www.postgresql.org)
17:49 PhoenixSTF creshal: well drbd at its time was the most viable solution, now not so much for the reason you said, it's innovation at its best
17:49 PhoenixSTF thanks for the support guys
17:53 Rapture joined #gluster
17:54 shaunm joined #gluster
18:01 haomaiwang joined #gluster
18:02 Dragotha joined #gluster
18:07 ron-slc joined #gluster
18:14 aneale joined #gluster
18:17 ivan_rossi left #gluster
18:21 kovshenin joined #gluster
18:28 jwd joined #gluster
18:28 kotreshhr left #gluster
18:38 jwaibel joined #gluster
18:39 jiffin joined #gluster
18:42 ira joined #gluster
18:43 ira_ joined #gluster
18:48 shaunm joined #gluster
18:53 amye joined #gluster
18:58 bowhunter joined #gluster
18:58 a2 joined #gluster
18:59 bennyturns joined #gluster
19:01 haomaiwa_ joined #gluster
19:03 side_control joined #gluster
19:14 Pupeno joined #gluster
19:29 Gill_ joined #gluster
19:32 skylar joined #gluster
19:39 skylar joined #gluster
19:43 skylar1 joined #gluster
19:55 David_Vargese joined #gluster
20:01 haomaiwa_ joined #gluster
20:03 timotheus1 joined #gluster
20:10 mhulsman joined #gluster
20:13 F2Knight joined #gluster
20:20 jwd joined #gluster
20:22 rwheeler joined #gluster
20:58 shaunm joined #gluster
21:01 haomaiwang joined #gluster
21:02 tomatto joined #gluster
21:12 timotheus1 joined #gluster
21:12 timotheus1 joined #gluster
21:21 calavera joined #gluster
21:26 timotheus1 joined #gluster
21:29 jdarcy joined #gluster
21:31 tomatto joined #gluster
21:37 haomaiwa_ joined #gluster
21:41 stickyboy joined #gluster
21:47 skylar joined #gluster
22:01 haomaiwa_ joined #gluster
22:04 gildub joined #gluster
22:04 wistof hi
22:04 glusterbot wistof: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
22:06 wistof i'm new to gluster, and i'm running glusterfs 3.2.7
22:07 wistof i've tried to add a new peer, which is already known, but with another ip
22:07 wistof the new peer has a uuid of 00000000-0000-0000-0000-000000000000
22:08 wistof i tried to remove it by its name, but i removed the other one
22:08 wistof now, i understand that a peer should be unique
22:08 wistof how can i revert my config ?
22:11 bowhunter joined #gluster
22:14 dlambrig_ joined #gluster
22:15 JoeJulian ~latest | wistof
22:15 glusterbot wistof: The latest version is available at http://download.gluster.org/pub/gluster/glusterfs/LATEST/ . There is a .repo file for yum or see @ppa for ubuntu.
22:16 JoeJulian First of all, start with a current version. 3.2 is ancient.
22:16 wistof JoeJulian : i know, but it's prod, so i can't upgrade now
22:20 wistof i tried to recreate the peer with the config file, but it doesn't work
22:21 tomatto joined #gluster
22:32 squizzi joined #gluster
22:32 JoeJulian wistof: The uuid should be in /var/lib/glusterd/glusterd.info . IIRC, all 0's happens when there's an error during peering. Check the logs.
22:35 wistof JoeJulian: Thanks. My files are in /etc/glusterd. I've got the UUID, but i can't recreate the config file by hand
22:35 wistof i try to check the log files
22:35 Vaizki_ joined #gluster
22:37 JoeJulian My personal view of the "it's in production so I cannot upgrade" philosophy is that if it's in production, why would you want to ensure you are absolutely running a version with known, often critical, bugs?
22:39 wistof i know, but now, i just want come back to a working configuration
22:39 wistof and after, take a look to the migration
22:46 Vaizki_ I am considering GlusterFS for a series of 2-node "clusters" which will run both failover services (with pacemaker/corosync) as well as a few services on both nodes which need access to a shared log file fs
22:46 Vaizki_ is it a bad idea to run applications on the same nodes which run gluster server?
22:51 chirino joined #gluster
22:53 JoeJulian Vaizki_: No, it's fine.
22:54 Vaizki_ ok great.. what about worst case scenarios for gluster, what kind of disk access is the worst for it?
22:56 Vaizki_ I actually never expect 2 nodes to access the same file simultaneously for writing. For reading yes and for writing to the same directory yes.
23:01 haomaiwang joined #gluster
23:10 wistof does glusterfs keep its configuration in memory? how can we explain that it won't read my new config file
23:14 JoeJulian wistof: A running glusterd does keep its configuration in memory. You would have to stop glusterd, make changes, then start glusterd to have it read them.
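A hedged sketch of what that edit-while-stopped cycle looks like. On wistof's 3.2 install the state lives under /etc/glusterd (newer releases moved it to /var/lib/glusterd), the peer UUID is a placeholder, and the service name depends on the distro package:

    service glusterfs-server stop           # init script name varies by distro/package
    ls /etc/glusterd/peers/                 # one file per known peer, named by its uuid
    # each peer file holds roughly: uuid=<peer-uuid>, state=3, hostname1=<address>
    vi /etc/glusterd/peers/<peer-uuid>      # fix the uuid/hostname entries by hand
    service glusterfs-server start
    gluster peer status                     # confirm the peer comes back with its real uuid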
23:16 wistof JoeJulian : ok, thanks.  I will try to explain my setup, and my  mistake
23:17 wistof i've 2 servers on 2 datacenters, running glusterfs. and one replicate volume
23:17 wistof the peers were talking over a private network, with an openvpn
23:18 wistof we noticed some latency, and i tried to create a new test volume, but using the public addresses
23:19 bennyturns joined #gluster
23:20 squizzi joined #gluster
23:20 mlncn joined #gluster
23:22 wistof so, when i probed my new peer on the public ip, in fact, it found a peer which is already known, but it added it, with a zero uuid
23:23 wistof so, since my volume config is still ok, and running, if i just recreate the peer config file on both nodes, and restart glusterfs, it should work
23:25 JoeJulian Interesting. That's fixed in newer versions. ;)
23:26 wistof :)
23:26 wistof i'm running under debian wheezy, i just need to upgrade with apt ?
23:27 JoeJulian @ppa
23:27 glusterbot JoeJulian: The official glusterfs packages for Ubuntu are available here: 3.5: http://goo.gl/6HBwKh 3.6: http://goo.gl/XyYImN 3.7: https://goo.gl/aAJEN5 -- See more PPAs for QEMU with GlusterFS support, and GlusterFS QA releases at https://launchpad.net/~gluster -- contact semiosis with feedback
23:27 wistof thanks
23:30 wistof JoeJulian : it should work if i do like i say?
23:30 JoeJulian I don't use ubuntu.
23:31 wistof i'm talking about modifying the config file and restarting glusterd
23:39 zhangjn joined #gluster
23:46 P0w3r3d joined #gluster
23:53 mlncn joined #gluster
