
IRC log for #gluster, 2015-10-13

| Channels | #gluster index | Today | | Search | Google Search | Plain-Text | summary

All times shown according to UTC.

Time Nick Message
00:34 Trivium joined #gluster
00:34 Trivium Hey folks, just wondering if gluster is an appropriate solution to my problem
00:36 JoeJulian Trivium: it is.
00:36 Trivium I have two servers (theoretically) in geographically disparate locations. I want to mirror the data on both servers.
00:36 JoeJulian From one to the other, or bidirectionally?
00:37 Trivium I also want to have an Active/Active setting where data can be pushed to either (either via nfs mounting or rsync or scp) and later found on the other server.
00:38 JoeJulian And you would like it to be performant? or can it be slow as molasses when opening files?
00:38 Trivium Can't be amazingly slow when writing or reading.
00:38 JoeJulian ... and I bet you'd like it to come with rainbows and unicorns. ;)
00:38 Trivium Currently if I can't find a solution I'll fall back to rsync(1) over the weekends.
00:39 Trivium Naw, I make my own rainbows.
00:39 Trivium I'm basically looking to synchronize two directories. They don't need to be written that very same minute.
00:39 JoeJulian No, there's no such thing. The closest you'll come is one of the inotify+rsync tools.
00:39 Trivium Okay, thank you.
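The "inotify+rsync tools" JoeJulian points at can be sketched as a small shell function; inotify-tools and rsync are real packages, but the paths and remote host in the usage line are illustrative:

```shell
# One-way mirror: block until something changes under $src, then rsync.
# Requires inotify-tools (inotifywait) and rsync on the source host.
sync_loop() {
    local src=$1 dest=$2
    while inotifywait -r -e modify,create,delete,move "$src"; do
        rsync -az --delete "$src"/ "$dest"
    done
}
# usage (illustrative): sync_loop /data/share user@remote:/data/share
```

Note this is one-way only; running it in both directions with `--delete` would be unsafe, which is why bidirectional tools like csync2 come up later in the log.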
00:46 David_Varghese joined #gluster
00:50 nangthang joined #gluster
00:57 plarsen joined #gluster
00:59 EinstCrazy joined #gluster
01:06 shyam joined #gluster
01:13 vimal joined #gluster
01:16 daMaestro joined #gluster
01:27 gildub joined #gluster
01:29 Lee1092 joined #gluster
01:35 ramky joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 harish_ joined #gluster
01:55 haomaiwa_ joined #gluster
02:01 daMaestro joined #gluster
02:01 haomaiwa_ joined #gluster
02:13 maveric_amitc_ joined #gluster
02:15 rafi joined #gluster
02:17 nangthang joined #gluster
02:18 David_Varghese joined #gluster
02:22 vmallika joined #gluster
02:52 hchiramm_home joined #gluster
03:01 haomaiwa_ joined #gluster
03:08 kdhananjay joined #gluster
03:09 vimal joined #gluster
03:19 kdhananjay1 joined #gluster
03:24 hchiramm_home joined #gluster
03:24 lpabon joined #gluster
03:25 cholcombe joined #gluster
03:25 nishanth joined #gluster
03:31 [7] joined #gluster
03:38 atinm joined #gluster
03:39 stickyboy joined #gluster
03:41 ramteid joined #gluster
03:45 haomaiwa_ joined #gluster
03:50 bharata-rao joined #gluster
03:50 rjoseph joined #gluster
04:01 17SADSVQL joined #gluster
04:03 gem joined #gluster
04:03 zhangjn joined #gluster
04:04 shubhendu joined #gluster
04:06 opal joined #gluster
04:07 arcolife joined #gluster
04:10 itisravi joined #gluster
04:12 ppai joined #gluster
04:15 opal left #gluster
04:17 rafi joined #gluster
04:17 spalai joined #gluster
04:20 nbalacha joined #gluster
04:22 kanagaraj joined #gluster
04:23 spalai left #gluster
04:25 sakshi joined #gluster
04:26 kotreshhr joined #gluster
04:27 Trivium Turns out csync2 might be able to do it....
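csync2 does bidirectional synchronization driven by a small config file; a minimal two-host sketch (hostnames, paths, and the group name are illustrative):

```
# /etc/csync2.cfg -- minimal two-host bidirectional sync
group mirror
{
    host server1 server2;
    key /etc/csync2.key;    # generate with: csync2 -k /etc/csync2.key
    include /data/share;
    auto younger;           # conflict resolution: newer file wins
}
```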
04:28 neha_ joined #gluster
04:37 ramky joined #gluster
04:38 yosafbridge joined #gluster
04:39 beeradb__ joined #gluster
04:39 clutchk1 joined #gluster
04:43 TheSeven joined #gluster
04:44 RameshN joined #gluster
04:46 pppp joined #gluster
04:47 skoduri joined #gluster
04:57 aravindavk joined #gluster
05:01 haomaiwa_ joined #gluster
05:20 ashiq joined #gluster
05:25 poornimag joined #gluster
05:26 hgowtham joined #gluster
05:27 jiffin joined #gluster
05:28 yazhini joined #gluster
05:30 ndarshan joined #gluster
05:34 neha_ joined #gluster
05:34 rafi joined #gluster
05:39 Bhaskarakiran joined #gluster
05:44 Manikandan joined #gluster
05:59 hagarth joined #gluster
06:01 haomaiwa_ joined #gluster
06:02 skoduri joined #gluster
06:12 cabillman joined #gluster
06:13 haomaiwa_ joined #gluster
06:15 Rapture joined #gluster
06:16 mhulsman joined #gluster
06:18 mjrosenb what are subvolumes?
06:23 chirino_m joined #gluster
06:24 jiffin mjrosenb: http://gluster.readthedocs.org/en/latest/Administrator%20Guide/glossary/
06:24 glusterbot Title: Glossary - Gluster Docs (at gluster.readthedocs.org)
06:26 l0uis joined #gluster
06:27 kotreshhr joined #gluster
06:27 ju5t joined #gluster
06:28 squaly joined #gluster
06:28 kdhananjay joined #gluster
06:29 fsimonce joined #gluster
06:29 karnan joined #gluster
06:30 mjrosenb jiffin: thanks.
06:31 mjrosenb well, that isn't confusing at all
06:34 jtux joined #gluster
06:47 Saravana_ joined #gluster
06:47 spalai joined #gluster
06:47 atalur_ joined #gluster
06:48 64MAD190K joined #gluster
06:53 vmallika joined #gluster
06:54 atalur joined #gluster
06:57 haomaiwang joined #gluster
06:59 LebedevRI joined #gluster
07:01 sakshi joined #gluster
07:02 haomaiwa_ joined #gluster
07:04 rastar joined #gluster
07:06 skoduri joined #gluster
07:09 dusmant joined #gluster
07:16 hagarth joined #gluster
07:18 maveric_amitc_ joined #gluster
07:19 Pupeno joined #gluster
07:23 LebedevRI joined #gluster
07:25 ivan_rossi joined #gluster
07:33 David_Varghese joined #gluster
07:36 deniszh joined #gluster
07:42 Philambdo joined #gluster
07:43 vikki joined #gluster
07:45 ctria joined #gluster
07:54 [Enrico] joined #gluster
07:56 skoduri joined #gluster
07:56 najib joined #gluster
08:01 haomaiwa_ joined #gluster
08:04 spalai joined #gluster
08:22 ramteid joined #gluster
08:28 ppai joined #gluster
08:30 Norky joined #gluster
08:37 Philambdo joined #gluster
08:47 karnan joined #gluster
08:54 Slashman joined #gluster
08:54 karnan joined #gluster
08:56 drankis joined #gluster
09:00 deniszh left #gluster
09:01 haomaiwa_ joined #gluster
09:10 ws2k3 joined #gluster
09:11 RayTrace_ joined #gluster
09:12 Saravana_ joined #gluster
09:15 spalai joined #gluster
09:17 frozengeek joined #gluster
09:21 jiffin1 joined #gluster
09:23 atalur joined #gluster
09:35 poornimag joined #gluster
09:35 raghu joined #gluster
09:39 stickyboy joined #gluster
09:49 David-Varghese joined #gluster
09:59 klaxa joined #gluster
09:59 asdf3 joined #gluster
10:00 asdf3 How come http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/ is empty but the previous version http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.4/Debian/ isn't?
10:00 glusterbot Title: Index of /pub/gluster/glusterfs/LATEST/Debian (at download.gluster.org)
10:01 asdf3 (I have 'deb http://download.gluster.org/pub/gluster/glusterfs/LATEST/Debian/jessie/apt jessie main' in my /etc/apt/sources.list.d/gluster.list)
10:01 haomaiwa_ joined #gluster
10:08 ju5t joined #gluster
10:15 thoht any reason why Debian jessie is installing gluster 3.5 and not 3.7 ?
10:19 David_Varghese joined #gluster
10:21 jiffin1 joined #gluster
10:29 poornimag joined #gluster
10:29 abyss_ thoht: because debian freeze his stable version before 3.7 has been released?
10:30 abyss^ *froze
10:30 haomaiwa_ joined #gluster
10:32 jwd joined #gluster
10:32 thoht abyss^: is it safe to use http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.2/Debian/jessie/apt as repo ?
10:33 thoht it is at least 3.7 branch
10:34 ivan_rossi abyss^: notice that you can get 3.7.4.from debian-backports on any official debian repo
10:34 thoht ivan_rossi: how do i find debian-backports ?
10:34 ivan_rossi https://packages.debian.org/jessie-backports/glusterfs-client
10:34 glusterbot Title: Debian -- Details of package glusterfs-client in jessie-backports (at packages.debian.org)
10:35 ivan_rossi in your sources.list or inside sources.list.d:
10:35 thoht deb http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.2/Debian/jessie/apt jessie main
10:35 glusterbot Title: Index of /pub/gluster/glusterfs/3.7/3.7.2/Debian/jessie/apt (at download.gluster.org)
10:35 thoht deb https://packages.debian.org/jessie-backports/glusterfs-client jessie main
10:35 thoht like that ?
10:36 ivan_rossi deb http://httpredir.debian.org/debian/ jessie-backports main contrib non-free
10:36 glusterbot Title: Index of / (at httpredir.debian.org)
10:36 thoht and i comment the first one; right ?
10:38 thoht ok
10:38 ivan_rossi if the repo are done properly you can even have both. but i would not mix. Either the official one or the gluster.org
10:38 thoht i  glusterfs-server               3.7.4-1~bpo8+1
10:38 thoht i got it now
10:38 thoht i kept only deb http://httpredir.debian.org/debian/ jessie-backports main contrib non-free
10:38 glusterbot Title: Index of /debian/ (at httpredir.debian.org)
10:38 thoht thanks ivan_rossi
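The working configuration from this exchange, collected in one place. Note that jessie-backports packages are not installed by default; they have to be selected with `apt-get -t jessie-backports`:

```
# /etc/apt/sources.list.d/backports.list
deb http://httpredir.debian.org/debian/ jessie-backports main contrib non-free

# then:
#   apt-get update
#   apt-get -t jessie-backports install glusterfs-server   # 3.7.4-1~bpo8+1
```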
10:39 thoht is it safe to run glusterfs on top of ZFS on Debian ?
10:39 ivan_rossi you're welcome
10:39 thoht (using ZOL)
10:39 ivan_rossi never tried. i tend to think of ZFS on linux as still a little too "experimental"
10:40 mufa joined #gluster
10:40 thoht ivan_rossi: so xfs is the best shot at this moment
10:41 thoht i had a gluster volume on 2 centos7 using ext4
10:41 thoht i just trashed 1 node and reinstalled it on Debian8
10:41 thoht i was wondering if i can remove the brick of the existing node and replace it with the freshly installed debian one
10:42 ivan_rossi i use xfs on debian everywhere since before redhat supported it in rhel. i never use ext4.
10:42 thoht will it be compatible to have 2 bricks using ext4 on one side and xfs on the other one ?
10:42 thoht at least temporarily
10:43 thoht when the sync is done, i will trash the last centos and reinstall it to debian 8 too
10:44 ivan_rossi i think so. as long as the filesystem supports xattrs, gluster should be filesystem agnostic. if i understood correctly. if not, gluster deities, please correct me ;-)
10:44 thoht great
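Gluster's filesystem-agnosticism rests on extended-attribute support, as ivan_rossi notes. A quick probe that a candidate brick filesystem handles user xattrs (the path in the usage line is illustrative; requires the attr package for setfattr/getfattr):

```shell
# Set and read back a user xattr on the brick mount point; failure here
# means the filesystem (or its mount options) won't work as a brick.
xattr_probe() {
    local f=$1/.xattr-probe
    touch "$f" &&
    setfattr -n user.probe -v ok "$f" &&
    getfattr --only-values -n user.probe "$f" &&
    rm -f "$f"
}
# usage (illustrative): xattr_probe /bricks/brick1
```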
10:48 thoht on the new debian8, i modified glusterd.info to keep the previous UUID from when it was a centos, so that the first node still identifies it with the same ID
10:48 thoht but now, how to synchronize ?
10:48 thoht the new node is saying "no volume present"
10:51 atalur joined #gluster
11:01 haomaiwa_ joined #gluster
11:05 ju5t joined #gluster
11:05 thoht ok it retrieved the volume info now
11:05 thoht but when running a heal, i got: Self-heal daemon is not running. Check self-heal daemon log file
11:10 thoht ok gluster volume sync ovirt01 all did the work !!
11:12 skoduri joined #gluster
11:15 thoht i love it :)
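The replacement sequence thoht pieced together above, summarized as a sketch. The hostname `ovirt01` is from the conversation; the volume name is a placeholder, and note that /var/lib/glusterd/glusterd.info also carries an operating-version line that should be kept intact:

```shell
# On the rebuilt node: reuse the dead node's UUID so peers recognize it.
# (Copy the old UUID= line into /var/lib/glusterd/glusterd.info, keep the
#  operating-version line, then restart glusterd.)
systemctl restart glusterd

# Pull the volume configuration over from a surviving peer:
gluster volume sync ovirt01 all

# Once the brick is back, check healing state:
gluster volume heal <volname> info
```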
11:20 yazhini joined #gluster
11:27 bluenemo joined #gluster
11:32 spalai joined #gluster
11:37 plarsen joined #gluster
11:37 Bhaskarakiran joined #gluster
11:48 hchiramm_home joined #gluster
11:51 spcmastertim joined #gluster
11:52 _shaps_ joined #gluster
11:52 haomaiwa_ joined #gluster
11:57 DV joined #gluster
11:58 Saravana_ joined #gluster
12:01 haomaiwang joined #gluster
12:08 rafi joined #gluster
12:14 LebedevRI joined #gluster
12:17 poornimag joined #gluster
12:17 haomaiwa_ joined #gluster
12:18 skoduri joined #gluster
12:22 ira joined #gluster
12:22 neha_ joined #gluster
12:35 ppai joined #gluster
12:37 the-me joined #gluster
12:39 maveric_amitc_ joined #gluster
12:39 zhangjn joined #gluster
12:40 zhangjn joined #gluster
12:40 EinstCrazy joined #gluster
12:41 zhangjn joined #gluster
12:42 zhangjn joined #gluster
12:43 taolei joined #gluster
12:43 zhangjn joined #gluster
12:43 hagarth joined #gluster
12:43 spalai left #gluster
12:48 ju5t joined #gluster
12:50 shaunm joined #gluster
12:51 sakshi joined #gluster
12:56 unclemarc joined #gluster
13:03 haomaiwa_ joined #gluster
13:07 vimal joined #gluster
13:12 bennyturns joined #gluster
13:15 julim joined #gluster
13:16 mpietersen joined #gluster
13:16 poornimag joined #gluster
13:16 shyam joined #gluster
13:23 mhulsman joined #gluster
13:24 skylar joined #gluster
13:25 ira joined #gluster
13:26 julim joined #gluster
13:40 dgandhi joined #gluster
13:48 ghenry joined #gluster
13:48 ghenry joined #gluster
13:50 taolei I have a distributed volume formed by 4 nodes. When one node goes down, the volume becomes read-only, and I manually remove the broken brick from the volume, after which it becomes read-write again, with total capacity reduced to 3/4. After the broken node recovers, I re-add the brick to the volume, with no error reported, and 'gluster volume status' shows all 4 bricks are Online. But the
13:50 taolei volume's total capacity doesn't grow, and no files are distributed to the re-added brick. Have I missed some key point?
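taolei's symptom (brick online but never receiving files) commonly means the distribute layout was never recomputed: add-brick alone does not update the DHT hash ranges. A sketch of the usual sequence, with illustrative volume and brick names:

```shell
gluster volume add-brick myvol node4:/bricks/brick1
# Recompute directory layouts so new files can hash to the new brick:
gluster volume rebalance myvol fix-layout start
# Or also migrate existing data onto it:
gluster volume rebalance myvol start
gluster volume rebalance myvol status
```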
13:56 kovshenin joined #gluster
13:57 ju5t joined #gluster
13:59 hamiller joined #gluster
14:01 haomaiwang joined #gluster
14:04 frozengeek joined #gluster
14:05 taolei left #gluster
14:07 julim joined #gluster
14:15 RayTrace_ joined #gluster
14:21 haomaiwa_ joined #gluster
14:27 vimal joined #gluster
14:28 Philambdo joined #gluster
14:33 atinm joined #gluster
14:34 bennyturns joined #gluster
14:37 Philambdo joined #gluster
14:49 deni Hi
14:49 glusterbot deni: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
14:49 deni I'm getting this error: [2015-10-13 14:44:19.000895] E [rpcsvc.c:617:rpcsvc_handle_rpc_call] 0-rpc-service: Request received from non-privileged port. Failing request
14:50 deni I have option rpc-auth-allow-insecure on in my /etc/glusterfs/glusterd.vol file
14:50 deni (I'm mentioning this because googling only led me to posts saying I should allow that option)
14:51 deni I should mention that this is an HA setup with 3 replicas
14:53 deni here's some more output from libgfapi: http://dpaste.com/06KPNY3
14:53 glusterbot Title: dpaste: 06KPNY3 (at dpaste.com)
14:54 skoduri joined #gluster
14:56 ibotty joined #gluster
14:58 ibotty Hi, I have a q re snapshots. Is it ok to have one thin pool that has multiple volumes, each used as a brick in different gluster volumes?
14:59 ibotty i.e.: one thin pool `thin_pool`, containing g_vol1, .. g_volN with g_vol$i part of gluster vol $i
14:59 ibotty will that work?
14:59 ibotty thank you in advance for helping out ;)
15:01 JoeJulian deni: You also have to allow insecure on the volumes. see "gluster volume set help"
15:01 haomaiwa_ joined #gluster
15:01 deni JoeJulian: I just did this: gluster volume set gv auth.allow '*' on each of the 3 nodes
15:01 JoeJulian ibotty: I can't think of any reason it wouldn't work.
15:01 deni do I need to "restart the volume" ?
15:01 deni if there is such a thing
15:01 JoeJulian No, auth.allow is not allow_insecure
15:02 maserati joined #gluster
15:02 JoeJulian I'm not at my desk at the moment, so I can't quickly get to that help text.
15:03 deni JoeJulian: I'm trying gluster volume set gv allow-insecure on now
15:03 deni will see if it works
15:03 JoeJulian gluster volume set help | grep insecure
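The two settings being disentangled in this exchange, side by side (volume name `gv` is from the conversation):

```shell
# Daemon-side, in /etc/glusterfs/glusterd.vol (restart glusterd after):
#     option rpc-auth-allow-insecure on

# Volume-side, so the bricks also accept unprivileged source ports:
gluster volume set gv server.allow-insecure on

# auth.allow is unrelated: it filters client addresses, not ports.
gluster volume set help | grep insecure    # confirm the exact option name
```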
15:04 _maserati_ joined #gluster
15:06 asijdoasijdaios joined #gluster
15:07 ibotty k thanks JoeJulian, I was not sure because the docs are not crystal clear on that part
15:07 ibotty "Since Gluster volume snapshot is based out of LVM snapshot, each Gluster  volume brick should be mapped to an independent thinly provisioned LVM.  Also this thinly provisioned LVM should not be used for any other  purpose."
15:08 ibotty I was not sure if they also meant the pool.
15:08 ibotty thanks for clarifying
15:08 ibotty :)
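ibotty's layout (one thin pool, several thin LVs, each serving as a brick of a different gluster volume) can be sketched with LVM; the VG name and sizes are illustrative:

```shell
# One thin pool in volume group vg0:
lvcreate --size 500G --thinpool thin_pool vg0
# Several thin volumes carved from the same pool:
lvcreate --virtualsize 200G --thin vg0/thin_pool --name g_vol1
lvcreate --virtualsize 200G --thin vg0/thin_pool --name g_vol2
# Format each as a brick (xfs with 512-byte inodes is the common advice):
mkfs.xfs -i size=512 /dev/vg0/g_vol1
mkfs.xfs -i size=512 /dev/vg0/g_vol2
```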
15:08 asdaisjdijd joined #gluster
15:12 atalur joined #gluster
15:19 wushudoin joined #gluster
15:19 ju5t joined #gluster
15:31 monotek1 joined #gluster
15:36 Bhaskarakiran joined #gluster
15:37 rafi joined #gluster
15:40 stickyboy joined #gluster
15:44 atinmu joined #gluster
15:56 kovshenin joined #gluster
16:01 haomaiwa_ joined #gluster
16:05 RayTrace_ joined #gluster
16:09 pdrakeweb joined #gluster
16:12 justinmburrous joined #gluster
16:28 skoduri joined #gluster
16:33 Manikandan joined #gluster
16:50 Manikandan_ joined #gluster
16:51 kotreshhr joined #gluster
16:57 Rapture joined #gluster
17:01 haomaiwa_ joined #gluster
17:06 ivan_rossi left #gluster
17:06 lbarfiel1 Can someone help me with a geo-rep issue?  Had a network drop, and now the geo-rep state is "faulty", with tons of "File Exists" and "Operation not permitted" errors in the slave logs.
17:09 shubhendu joined #gluster
17:12 primehaxor are there some options for small files ? i have about 700GB of static files and i would like to increase the write performance on this volume, the write performance is so poor, im using a distributed volume
17:13 poornimag joined #gluster
17:14 Leildin primehaxor, you should look into volume options that can be tweaked to help a bit. I know we changed stuff and got slightly better performance
17:14 Leildin how do you access the files ?
17:14 primehaxor Leildin via fuse
17:14 primehaxor and render via webserver
17:15 Leildin ok, we had issues using samba but solved them getting fuse instead (migration to linux <3)
17:24 primehaxor ty =]
17:32 jiffin joined #gluster
17:33 kovshenin joined #gluster
17:34 kovshenin joined #gluster
17:41 lbarfield No one has any experience with geo-rep "file exists" issues?
17:53 dlambrig_ joined #gluster
18:01 haomaiwa_ joined #gluster
18:03 dlambrig_ joined #gluster
18:09 pdrakeweb joined #gluster
18:26 ayma joined #gluster
18:29 Gu_______ joined #gluster
18:31 poornimag joined #gluster
18:36 prg3 joined #gluster
18:43 jdarcy joined #gluster
18:48 kovshenin joined #gluster
18:49 kovshenin joined #gluster
18:50 lalatenduM joined #gluster
18:51 Gue______ joined #gluster
18:54 haomaiwa_ joined #gluster
18:56 Gue______ joined #gluster
18:57 shyam joined #gluster
18:58 Gue______ joined #gluster
19:12 nzero joined #gluster
19:15 chr1st1an joined #gluster
19:16 nzero i'm getting a weird issue using nfs to access gluster. not using any replication, just a single copy. but when a client using nfs tries to create a file it says "file not found", that client can see all the other files in a directory. when using the gluster volume i am able to create the files just fine. in fact, the problematic client can write to a tmp folder using nfs and gluster, so it is just affecting some folder/filenames
19:16 nzero through nfs. gluster volume status looks fine. any suggestions on where to start looking?
19:17 ndk joined #gluster
19:19 portante joined #gluster
19:22 a_ta joined #gluster
19:25 mhulsman joined #gluster
19:28 Chr1st1an_ joined #gluster
19:34 rafi joined #gluster
19:38 haomaiwa_ joined #gluster
19:41 nzero just realized that the server that is able to make the files is actually using nfs to access the files, too
19:52 Chr1st1an joined #gluster
20:01 Chr1st1an joined #gluster
20:02 Chr1st1an joined #gluster
20:03 mpietersen joined #gluster
20:04 Chr1st1an joined #gluster
20:06 shyam joined #gluster
20:07 Chr1st1an joined #gluster
20:10 Pupeno joined #gluster
20:10 maserati joined #gluster
20:11 togdon joined #gluster
20:32 Chr1st1an joined #gluster
20:33 kovshenin joined #gluster
20:36 Chr1st1an joined #gluster
20:41 haomaiwa_ joined #gluster
20:42 Pupeno joined #gluster
20:46 hagarth joined #gluster
20:46 Chr1st1an joined #gluster
20:47 Chr1st1an joined #gluster
20:49 cyberbootje joined #gluster
20:55 plarsen joined #gluster
21:02 mhulsman joined #gluster
21:18 gildub joined #gluster
21:24 theron joined #gluster
21:26 Pupeno joined #gluster
21:26 Pupeno joined #gluster
21:30 Chr1st1an joined #gluster
21:31 Chr1st1an joined #gluster
21:39 stickyboy joined #gluster
21:49 deniszh joined #gluster
21:50 Rapture joined #gluster
22:07 DV joined #gluster
22:44 haomaiwa_ joined #gluster
22:51 badone joined #gluster
22:51 kovshenin joined #gluster
22:52 sysconfig joined #gluster
23:01 shyam joined #gluster
23:05 nzero joined #gluster
23:14 kovshenin joined #gluster
23:18 shaunm joined #gluster
23:39 zhangjn joined #gluster
23:40 zhangjn joined #gluster
23:45 squaly joined #gluster
23:46 haomaiwa_ joined #gluster
23:55 Rapture joined #gluster