
IRC log for #gluster, 2016-06-13


All times shown according to UTC.

Time Nick Message
00:15 rokn joined #gluster
00:17 rokn I have a cloud based cluster and it's getting low on space. Does anyone know how gluster will behave if i grow the underlying xfs filesystem?
00:20 rokn to clarify, it is a replica cluster.
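For reference, growing the filesystem under a brick on a replica volume is generally safe; gluster reports whatever capacity the brick filesystem has. A minimal sketch, assuming the bricks sit on LVM-backed XFS (device, mount point, and volume name are hypothetical):

    lvextend -L +100G /dev/vg0/brick1    # grow the logical volume under the brick
    xfs_growfs /srv/brick1               # grow XFS online while the brick stays mounted
    gluster volume status myvol detail   # confirm the new capacity is reported

Repeat on each replica node so the bricks stay the same size.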
00:46 armyriad joined #gluster
00:48 haomaiwang joined #gluster
01:01 v12aml joined #gluster
01:29 haomaiwang joined #gluster
01:32 amye joined #gluster
01:32 amye_ joined #gluster
01:49 alghost left #gluster
01:49 alghost joined #gluster
01:52 Rusty joined #gluster
01:52 RustyB joined #gluster
01:58 Lee1092 joined #gluster
02:18 RustyB hey folks. i don't know if anyone has run into a similar problem. I have coreos installed on one machine, and gluster installed on another machine (cluster of 1 until i build the new boxes). however, i am having crazy issues with nfs randomly hanging between coreos and glusterfs
02:19 RustyB unfortunately i am stuck mounting via NFS because of the lack of gluster support in coreos. i tried changing up the nfs mount options, but no luck. additionally, i was going to try to mount as nfs4 to see if that changed anything, but apparently gluster only supports v3 :-/
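For context, a hedged example of the NFSv3 mount being described, against gluster's built-in gNFS server (server and volume names are hypothetical):

    # gNFS only speaks NFSv3, so pin the version and use TCP for both
    # the MOUNT protocol and NFS itself
    mount -t nfs -o vers=3,proto=tcp,mountproto=tcp server1:/myvol /mnt/gv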
02:37 hagarth joined #gluster
02:51 harish joined #gluster
03:00 Bhaskarakiran joined #gluster
03:04 RustyB it looks like some of this may be solved with nfs-ganesha, but the roadmap says 3.8 is due first quarter this year. any idea when it might be released?
03:05 Gambit15 joined #gluster
03:08 haomaiwang joined #gluster
03:11 msvbhat_ joined #gluster
03:11 amye RustyB: the community page? That needs to be updated, but you can expect 3.8 in the next few weeks. :)
03:11 ramteid joined #gluster
03:12 RustyB awesome
03:12 RustyB any ideas if it might fix my nfs hanging issue :p
03:12 amye We're at RC2 as of May 25
03:12 amye That, I can't speak to. Sorrrrry.
03:12 RustyB (prays for coreos to just include gluster support)
03:12 RustyB hah no worries. thanks amye :)
03:13 amye np. just happened to be lurking around. :)
03:21 Bhaskarakiran joined #gluster
03:31 Bhaskarakiran joined #gluster
03:42 kdhananjay joined #gluster
03:46 Saravanakmr joined #gluster
03:55 poornimag joined #gluster
04:01 kramdoss_ joined #gluster
04:01 itisravi joined #gluster
04:03 nbalacha joined #gluster
04:05 gem joined #gluster
04:07 overclk joined #gluster
04:09 RameshN joined #gluster
04:09 ramky joined #gluster
04:10 hgowtham joined #gluster
04:14 msvbhat_ joined #gluster
04:16 atinm joined #gluster
04:28 raghug joined #gluster
04:31 Bhaskarakiran joined #gluster
04:34 nehar joined #gluster
04:38 harish_ joined #gluster
04:47 gowtham joined #gluster
04:54 aspandey joined #gluster
04:55 aravindavk joined #gluster
04:57 ndarshan joined #gluster
05:02 Gnomethrower joined #gluster
05:04 satya4ever_ joined #gluster
05:06 zerick_ joined #gluster
05:07 nehar joined #gluster
05:07 prasanth joined #gluster
05:11 ppai joined #gluster
05:12 hgowtham joined #gluster
05:18 zerick_ joined #gluster
05:21 bb0x joined #gluster
05:30 Manikandan joined #gluster
05:39 Apeksha joined #gluster
05:39 skoduri joined #gluster
05:40 Apeksha_ joined #gluster
05:42 nishanth joined #gluster
05:57 kotreshhr joined #gluster
06:06 karthik___ joined #gluster
06:06 ashiq joined #gluster
06:06 hgowtham_ joined #gluster
06:10 jtux joined #gluster
06:16 kshlm joined #gluster
06:24 aspandey joined #gluster
06:29 harish_ joined #gluster
06:29 karnan joined #gluster
06:30 [Enrico] joined #gluster
06:36 kdhananjay joined #gluster
06:36 zerick_ joined #gluster
06:36 itisravi joined #gluster
06:38 rafi joined #gluster
06:39 karnan joined #gluster
06:50 RameshN joined #gluster
06:51 karnan_ joined #gluster
06:52 k4n0 joined #gluster
07:05 jri joined #gluster
07:07 anil_ joined #gluster
07:07 pur__ joined #gluster
07:14 hackman joined #gluster
07:17 fsimonce joined #gluster
07:24 overclk_ joined #gluster
07:28 hgowtham joined #gluster
07:29 ivan_rossi joined #gluster
07:32 itisravi_ joined #gluster
07:33 mbukatov joined #gluster
07:42 kdhananjay joined #gluster
07:46 overclk joined #gluster
07:55 wnlx joined #gluster
07:56 jiffin joined #gluster
07:57 shubhendu joined #gluster
07:58 ahino joined #gluster
07:58 Slashman joined #gluster
08:00 deniszh joined #gluster
08:00 ashiq joined #gluster
08:08 gowtham joined #gluster
08:10 karthik___ joined #gluster
08:11 RameshN joined #gluster
08:14 arif-ali joined #gluster
08:17 arif-ali_ joined #gluster
08:18 ashiq joined #gluster
08:20 kovshenin joined #gluster
08:25 ghenry joined #gluster
08:31 overclk joined #gluster
08:31 arif-ali_ joined #gluster
08:36 itisravi joined #gluster
08:45 bb0x joined #gluster
08:45 kdhananjay joined #gluster
08:50 karnan joined #gluster
08:52 atalur joined #gluster
08:58 arif-ali_ joined #gluster
08:58 arcolife joined #gluster
09:01 kdhananjay joined #gluster
09:01 itisravi_ joined #gluster
09:02 itisravi_ joined #gluster
09:06 itisravi joined #gluster
09:09 hgowtham joined #gluster
09:17 atalur joined #gluster
09:24 bb0x joined #gluster
09:30 ppai joined #gluster
09:38 muneerse2 joined #gluster
09:43 anil_ joined #gluster
09:47 nishanth joined #gluster
09:48 atinm joined #gluster
09:51 DV joined #gluster
09:55 atalur joined #gluster
09:56 kokopelli joined #gluster
09:59 kokopelli hello, I'm having a problem with healing. I had 2 nodes and added one more node. The last node has a different size. I ran gluster vol vol_name heal full, and the diff is now 65G. What can I do?
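For reference, the heal commands in play here, including ones that show what is still pending (volume name hypothetical):

    gluster volume heal myvol full        # trigger a full self-heal crawl
    gluster volume heal myvol info        # list entries still awaiting heal
    gluster volume heal myvol statistics  # show per-brick heal progress

A freshly added replica can lag for a while; heal info shrinking over time is the signal that the crawl is making progress.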
10:00 shubhendu joined #gluster
10:03 ppai joined #gluster
10:12 arif-ali__ joined #gluster
10:18 anil_ joined #gluster
10:19 arif-ali_ joined #gluster
10:22 DV joined #gluster
10:23 ppai joined #gluster
10:25 DV joined #gluster
10:30 robb_nl joined #gluster
10:35 raghug joined #gluster
10:37 nehar joined #gluster
10:39 ppai joined #gluster
10:40 anil_ joined #gluster
10:45 kxseven joined #gluster
10:49 arif-ali_ joined #gluster
10:52 d0nn1e joined #gluster
10:59 atinm joined #gluster
10:59 nishanth joined #gluster
11:00 bfoster joined #gluster
11:02 skoduri_ joined #gluster
11:17 RameshN joined #gluster
11:24 overclk joined #gluster
11:31 nottc joined #gluster
11:39 ppai joined #gluster
11:42 Gambit15 joined #gluster
11:44 DV joined #gluster
11:49 aravindavk joined #gluster
11:53 Manikandan joined #gluster
11:56 Bhaskarakiran joined #gluster
12:01 karthik___ joined #gluster
12:02 aravindavk joined #gluster
12:15 abhiqqqq joined #gluster
12:16 abhiqqqq Hi
12:16 glusterbot abhiqqqq: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
12:16 abhiqqqq I have a problem where the owner of files/folders gets changed automatically & it is random.
12:16 abhiqqqq I am using glusterfs 3.6.9 for both server & client on ubuntu 14.04
12:20 abhiqqqq How can I fix the issue?
12:22 RameshN joined #gluster
12:34 atinm joined #gluster
12:36 ben453 joined #gluster
12:42 julim joined #gluster
12:45 ira joined #gluster
12:49 arif-ali joined #gluster
12:53 overclk joined #gluster
12:53 plarsen joined #gluster
13:01 rwheeler joined #gluster
13:08 ank joined #gluster
13:12 ivan_rossi left #gluster
13:12 hgowtham joined #gluster
13:16 atinm joined #gluster
13:25 dblack joined #gluster
13:26 Debloper joined #gluster
13:35 robb_nl joined #gluster
13:40 Gambit15 abhiqqqq, are the UIDs & GIDs changing?
13:42 abhiqqqq yes
13:45 skylar joined #gluster
13:48 haomaiwang joined #gluster
13:49 squizzi joined #gluster
13:50 plarsen joined #gluster
14:03 chirino joined #gluster
14:13 plarsen joined #gluster
14:14 Gambit15 abhiqqqq: are the files/directories accessed by more than one client? (ie. a share)
14:15 abhiqqqq yes
14:15 abhiqqqq Gambit15:Yes
14:18 Gambit15 What do you mean by "random"?
14:21 Gambit15 Files should be created with the ID of the creating user, but file updates shouldn't affect that. If you want everything within a directory to be created with the same gid/uid, use the setuid/setgid permission bits.
14:22 Gambit15 Sounds like your problem isn't with gluster, but with the filesystem on the volume & how you're using it
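A quick illustration of the setgid suggestion above (directory and group names are hypothetical):

    chgrp shared /mnt/gv/project    # give the directory the desired group
    chmod g+s /mnt/gv/project       # files created inside now inherit that group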
14:39 hagarth joined #gluster
14:39 shaunm joined #gluster
14:42 shaunm joined #gluster
14:42 abhiqqqq Random means it happens when more I/O is hitting the gluster volume. Maybe when self heal happens. I am using a replicated setup with 2 nodes
14:42 abhiqqqq This was a known bug that got fixed in 3.6.3
14:42 abhiqqqq seems it still exists in 3.6.9
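One hedged way to narrow this down: compare ownership through the client mount against the bricks directly; replicas disagreeing with each other would point at a heal problem rather than a client one (paths are hypothetical):

    stat -c '%u:%g %n' /mnt/gv/somefile      # as seen through the client mount
    stat -c '%u:%g %n' /srv/brick1/somefile  # directly on each brick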
14:52 JesperA joined #gluster
15:00 julim joined #gluster
15:04 robb_nl joined #gluster
15:08 amye joined #gluster
15:13 chirino joined #gluster
15:15 wushudoin joined #gluster
15:17 kpease joined #gluster
15:23 Manikandan joined #gluster
15:24 kpease joined #gluster
15:30 johnmilton joined #gluster
15:35 nbalacha joined #gluster
15:35 nathwill joined #gluster
15:35 radius_ joined #gluster
15:43 ghenry joined #gluster
15:47 johnmilton joined #gluster
15:52 Manikandan joined #gluster
16:08 plarsen joined #gluster
16:12 RameshN joined #gluster
16:14 RameshN_ joined #gluster
16:16 jiffin joined #gluster
16:26 Gambit15 joined #gluster
16:27 skoduri joined #gluster
16:43 shubhendu joined #gluster
16:57 shubhendu joined #gluster
17:01 hagarth joined #gluster
17:06 Slashman joined #gluster
17:19 aspandey joined #gluster
17:24 kpease joined #gluster
17:26 kpease joined #gluster
17:29 squizzi joined #gluster
17:29 alvinstarr joined #gluster
17:41 jiffin1 joined #gluster
17:54 yosafbridge joined #gluster
18:04 abhiqqqq joined #gluster
18:09 gluytium joined #gluster
18:22 ira joined #gluster
18:23 chirino_m joined #gluster
18:31 gowtham joined #gluster
18:42 Philambdo joined #gluster
18:52 ghenry_ joined #gluster
19:03 deniszh joined #gluster
19:08 Elmo_ joined #gluster
19:29 shaunm joined #gluster
20:06 deniszh joined #gluster
20:18 muneerse joined #gluster
20:20 primusinterpares joined #gluster
20:48 ghenry joined #gluster
20:48 ghenry joined #gluster
20:50 julim joined #gluster
21:56 andre_ joined #gluster
21:57 andrez hi
21:57 glusterbot andrez: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
21:58 andrez need help with gluster configuration
21:59 andrez I have a "big server" and 4 smaller ones. I want to stripe the data on the 4 smaller ones but keep a copy of all data (replicate?) on the bigger one, is it possible?
21:59 andrez I can't fit all the data on the smaller ones
22:03 andrez I want to store VM images (raw) and I'm using gluster 3.7.11
22:16 ahino joined #gluster
22:18 hagarth joined #gluster
22:19 andrez I have a "big server" and 4 smaller ones. I want to stripe the data on the 4 smaller ones but keep a copy of all data (replicate?) on the bigger one, is it possible?
22:19 andrez I can't fit all the data on the smaller ones, I want to store VM images (raw) and I'm using gluster 3.7.11
22:40 JoeJulian andrez: You said that already. :P I would avoid ,,(stripe). Look at disburse.
22:40 glusterbot andrez: Please see http://joejulian.name/blog/should-i-use-stripe-on-glusterfs/ about stripe volumes.
22:44 andrez sorry for double posting
22:44 andrez what do you suggest then?
22:44 andrez distribute+replicate?
22:45 JoeJulian If your images exceed the size of your smaller bricks, then that won't work. Like I said, though, look at disburse.
22:45 andrez now I have almost 100 VMs, each with its own raw file
22:48 andrez sorry, disburse or disperse?
22:50 JoeJulian Oh, right, it is disperse.
22:50 JoeJulian oops
22:50 JoeJulian Though if they're interested in disbursement, I'll not say no.
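For reference, a dispersed volume across the four smaller nodes might be created like this; a sketch only, with hostnames, brick paths, and the redundancy count all hypothetical:

    gluster volume create vmvol disperse 4 redundancy 1 \
        node1:/srv/b1 node2:/srv/b1 node3:/srv/b1 node4:/srv/b1
    gluster volume start vmvol

With 4 bricks and redundancy 1 the volume stores roughly 3 bricks' worth of data and tolerates losing any single node; gluster may warn that a 3+1 layout is not optimal and ask for confirmation.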
22:53 andrez can I force a peer to have all the information
22:53 andrez ?
22:55 andrez the problem is that I have a big machine that has all the VMs, but I'm trying to convince my boss to change to a distributed fs (gluster)
22:55 andrez I don't want to "waste" the big machine that we already have
22:56 andrez I would like to do a "local cache" on the hosts that run the VMs
22:58 andrez and the peers would have different disk capacities
23:07 JoeJulian Convincing the boss should be easy. Show him the math. Having all your eggs in one basket is a huge liability issue, the MTBF is terrible and the MTTR probably isn't very good either.
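To make that math concrete (numbers purely illustrative): if a single server is up 99% of the time, two-way replication is down only when both copies are, roughly (1 - 0.99)^2 = 0.01% of the time, i.e. about 99.99% availability, and repairs can run while the surviving copy keeps serving.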
23:09 post-factum ...that feeling when the NOC misses the right server and detaches the database node from the network instead of my hypervisor…
23:09 post-factum happy night upgrades
23:10 andrez we've all been there hahaha
23:11 andrez so no problem with peers having different disk capacities?
23:12 andrez gluster will balance it somehow
23:26 amye joined #gluster
23:45 andrez thanks for the tip on disperse option
23:45 andrez cya
