
IRC log for #gluster, 2015-11-06


All times shown according to UTC.

Time Nick Message
00:00 dlambrig_ joined #gluster
00:01 haomaiwa_ joined #gluster
00:01 wchance ok we will give it a shot tomorrow
00:03 zhangjn joined #gluster
00:03 hgichon joined #gluster
00:12 bluenemo joined #gluster
00:19 pocketprotector joined #gluster
00:21 calavera joined #gluster
00:24 dlambrig_ joined #gluster
00:46 mlhamburg joined #gluster
00:54 mlncn joined #gluster
00:54 plarsen joined #gluster
00:54 B21956 joined #gluster
01:01 zhangjn joined #gluster
01:01 haomaiwa_ joined #gluster
01:02 EinstCrazy joined #gluster
01:16 TheSeven joined #gluster
01:27 gzcwnk joined #gluster
01:37 DV joined #gluster
01:41 cliluw How do I increase the Gluster logging level?
01:43 p0rtal you might want to pick up CentOS. That is basically RHEL without the branding and support, to put it in very simple terms
01:50 gem joined #gluster
02:01 haomaiwa_ joined #gluster
02:02 p0rtal joined #gluster
02:05 edong23 joined #gluster
02:05 dlambrig_ joined #gluster
02:08 JoeJulian cliluw: "gluster volume set help" search for log-level.
02:09 cliluw JoeJulian: Oh, I saw that, but doesn't that only apply per volume? I want to increase the log level for everything in Gluster, not just for a particular volume.
02:11 JoeJulian Yeah, that's not something you can set.
02:11 nangthang joined #gluster
02:11 JoeJulian You can change the log-level for glusterd on the command line or in /etc/glusterfs/glusterd.vol, but the rest run off the volume settings.
02:12 cliluw JoeJulian: Increasing the log level for glusterd should do it for me. Do you know the command for that?
02:12 JoeJulian vim /etc/glusterfs/glusterd.vol
02:12 JoeJulian ;)
02:13 JoeJulian If this is just temporarily to diagnose a problem, I always just "glusterd --debug"
02:13 cliluw JoeJulian: Ok, that works for me. Thank you.
02:13 JoeJulian You're welcome.
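A hedged sketch of the approaches JoeJulian describes above; the per-volume options and "glusterd --debug" are taken from the conversation, while the exact option name inside glusterd.vol is an assumption, so treat it as illustrative rather than authoritative:

    # Per-volume log levels (what "gluster volume set help" points at);
    # the volume name "myvol" is hypothetical:
    gluster volume set myvol diagnostics.brick-log-level DEBUG
    gluster volume set myvol diagnostics.client-log-level DEBUG

    # glusterd itself: run it in the foreground with debug logging ...
    glusterd --debug

    # ... or (assumed syntax) add a log-level option inside the management
    # volume block of /etc/glusterfs/glusterd.vol and restart glusterd:
    #   option log-level DEBUG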
02:14 JoeJulian What problem are you hunting?
02:14 harish joined #gluster
02:15 cliluw JoeJulian: I'm trying to use the NFS Ganesha integration.
02:16 harish joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
03:02 haomaiwa_ joined #gluster
03:14 bharata-rao joined #gluster
03:17 ayma joined #gluster
03:37 overclk_ joined #gluster
03:43 p0rtal joined #gluster
03:45 p0rtal_ joined #gluster
03:48 shubhendu_ joined #gluster
03:55 gem joined #gluster
03:56 kotreshhr joined #gluster
03:59 plarsen joined #gluster
04:00 itisravi joined #gluster
04:04 atinm joined #gluster
04:09 bharata_ joined #gluster
04:10 nbalacha joined #gluster
04:12 ashiq joined #gluster
04:18 sakshi joined #gluster
04:19 ppai joined #gluster
04:21 neha_ joined #gluster
04:22 Bhaskarakiran joined #gluster
04:23 Manikandan joined #gluster
04:25 Humble joined #gluster
04:26 p0rtal joined #gluster
04:31 nbalacha joined #gluster
04:32 aravindavk joined #gluster
04:33 TheSeven joined #gluster
04:38 p0rtal joined #gluster
04:41 nishanth joined #gluster
04:51 pppp joined #gluster
04:56 hgowtham joined #gluster
04:57 hgowtham_ joined #gluster
04:57 hgowtham_ joined #gluster
04:58 hgowtham_ joined #gluster
05:04 ndarshan joined #gluster
05:05 rafi joined #gluster
05:07 vimal joined #gluster
05:15 vmallika joined #gluster
05:16 zhangjn joined #gluster
05:18 overclk joined #gluster
05:19 mjrosenb is there anything I can do to get gluster to get gluster to re-create the link in .glusterfs for every file on the system?
05:19 mjrosenb or should I just do it with a python script, since that is pretty straightforward?
05:20 jiffin joined #gluster
05:21 overclk mjrosenb: lookup() [named] should create the .glusterfs linkage
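A hedged one-liner version of what overclk describes, triggering a named lookup on every file through a client mount (the mount point is hypothetical):

    # Each named lookup gives the brick a chance to (re)create its
    # .glusterfs link for that file; slow on large volumes.
    find /mnt/glustervol -type f -exec stat {} + > /dev/null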
05:21 raghu joined #gluster
05:33 kdhananjay joined #gluster
05:33 ramky joined #gluster
05:34 skoduri joined #gluster
05:36 hagarth joined #gluster
05:37 RameshN_ joined #gluster
05:40 anil joined #gluster
05:44 coredump joined #gluster
05:54 arcolife joined #gluster
05:55 suliba joined #gluster
05:58 arcolife joined #gluster
06:01 rp_ joined #gluster
06:07 zhangjn joined #gluster
06:09 kshlm joined #gluster
06:16 kanagaraj joined #gluster
06:20 haomaiwa_ joined #gluster
06:22 karnan joined #gluster
06:22 vmallika joined #gluster
06:30 Lee1092 joined #gluster
06:30 lalatenduM joined #gluster
06:31 neha_ joined #gluster
06:34 nangthang joined #gluster
06:45 spalai joined #gluster
06:46 gem joined #gluster
06:50 creshal joined #gluster
06:50 ramteid joined #gluster
06:51 p0rtal joined #gluster
06:52 klaas joined #gluster
06:52 dusmant joined #gluster
06:54 rgarcia joined #gluster
06:55 mjrosenb overclk: it should, but it looks like it only actually does that the first time you try to look something up.
06:57 rgarcia Hi, I've configured a cluster with 2 nodes in replica. When I take one of the replicas down by shutting down one of the servers the other mount becomes inaccessible for a while and then it comes back. Is this downtime expected?
06:58 neha_ joined #gluster
06:59 creshal rgarcia: Default timeout is something like 40 seconds, I think.
06:59 auzty joined #gluster
07:00 shubhendu_ joined #gluster
07:01 haomaiwa_ joined #gluster
07:02 mjrosenb also, it'll probably be faster if it doesn't involve stat'ing a few million files on a not-very-fast client.
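A minimal sketch of the python-script route mjrosenb mentions, run directly on a brick rather than through a slow client; it assumes the usual .glusterfs layout (two nested directories keyed on the gfid) and the trusted.gfid xattr, and only handles regular files (directories use symlinks in .glusterfs), so treat it as a starting point rather than a vetted tool:

    #!/usr/bin/env python3
    # Hedged sketch: recreate missing .glusterfs hard links on one brick.
    # Run against the brick path itself (argv[1]), not a client mount.
    import os
    import sys
    import uuid

    brick = sys.argv[1]  # e.g. /export/brick1 (hypothetical path)

    for root, dirs, files in os.walk(brick):
        dirs[:] = [d for d in dirs if d != ".glusterfs"]  # skip the link tree
        for name in files:
            path = os.path.join(root, name)
            try:
                raw = os.getxattr(path, "trusted.gfid")  # 16 raw bytes
            except OSError:
                continue  # no gfid xattr; let gluster assign one via lookup
            gfid = str(uuid.UUID(bytes=raw))
            link = os.path.join(brick, ".glusterfs", gfid[:2], gfid[2:4], gfid)
            if not os.path.exists(link):
                os.makedirs(os.path.dirname(link), exist_ok=True)
                os.link(path, link)  # hard link the brick file into .glusterfs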
07:09 mhulsman joined #gluster
07:11 neha_ joined #gluster
07:12 rgarcia creshal: Looks like you are right! Thanks for the hint - found this thread: http://serverfault.com/questions/619355/how-to-lower-gluster-fs-down-peer-timeout-reduce-down-peer-impact
07:12 glusterbot Title: high availability - How to lower Gluster FS down peer timeout / reduce down peer impact? - Server Fault (at serverfault.com)
07:13 F2Knight joined #gluster
07:14 overclk mjrosenb: how did you end up with a missing .glusterfs linkage on the first place?
07:15 rgarcia I've set the ping-timeout and it seems to work. I wonder why it is set to such a high value by default if it makes the other replica unavailable.
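For reference, the setting rgarcia is talking about is network.ping-timeout (default around 42 seconds); the volume name and value below are hypothetical:

    # Lower the client ping timeout so a dead server is declared sooner;
    # very low values can cause spurious disconnects under load.
    gluster volume set myvol network.ping-timeout 10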
07:20 jtux joined #gluster
07:34 Bhaskarakiran joined #gluster
07:38 fsimonce joined #gluster
07:44 Bhaskarakiran joined #gluster
07:45 rafi joined #gluster
07:48 jtux joined #gluster
07:48 kovshenin joined #gluster
07:56 LebedevRI joined #gluster
07:58 bhuddah joined #gluster
08:01 haomaiwa_ joined #gluster
08:02 fabio joined #gluster
08:02 ramky joined #gluster
08:13 Raide joined #gluster
08:28 ramky joined #gluster
08:35 rafi joined #gluster
08:37 mlhamburg1 joined #gluster
08:42 atalur joined #gluster
08:43 morse joined #gluster
08:44 RameshN_ joined #gluster
08:48 itisravi joined #gluster
08:50 ivan_rossi joined #gluster
08:52 kotreshhr joined #gluster
08:53 Raide joined #gluster
08:56 [Enrico] joined #gluster
09:04 overclk joined #gluster
09:04 overclk_ joined #gluster
09:06 ndarshan joined #gluster
09:07 shubhendu_ joined #gluster
09:09 deepakcs joined #gluster
09:14 Slashman joined #gluster
09:15 cuqa_ joined #gluster
09:33 itisravi_ joined #gluster
09:36 arcolife joined #gluster
09:40 haomaiwang joined #gluster
09:41 mhulsman joined #gluster
09:41 jamesc joined #gluster
09:42 kotreshhr joined #gluster
09:46 jamesc I have a gluster setup with replication x 2 and I deleted a file from one of them. On the nfs client mounted on the server with the missing file it is not present, until I guess it's there and stat it. Then it magically appears on both servers. How do I force this behaviour?
09:50 Bhaskarakiran joined #gluster
09:55 wchance joined #gluster
09:56 Raide joined #gluster
10:00 deniszh joined #gluster
10:01 jiffin jamesc: did u delete file from the server??
10:01 jamesc yes directly from the brick to simulate a corruption.
10:02 jiffin i can explain why that happened, it's just gluster behavior
10:03 zhangjn joined #gluster
10:03 jiffin the gluster nfs server establishes connections to every brick (backend server)
10:04 jiffin in the case of a replica pair, when a lookup, read, etc. is performed (fops which do not modify data), it sends the request to one of the pair
10:05 jamesc I tried a self heal but the bricks aren't in sync still.
10:05 creshal You'll need to rebalance the volume to make gluster detect the change manually, I think?
10:05 jiffin jamesc: heal will be enough
10:06 jiffin jamesc: i hope you did an ls on that path
10:06 jiffin jamesc: in ur case
10:07 jamesc I tried gluster volume heal data on both servers, and did info and could see stuff happening, but when it finished they still were not in sync
10:08 Bhaskarakiran joined #gluster
10:08 zhangjn joined #gluster
10:08 kotreshhr left #gluster
10:09 jiffin u mean the deleted file didn't reappear on the server
10:10 skoduri joined #gluster
10:15 deniszh1 joined #gluster
10:18 deniszh joined #gluster
10:19 julim_ joined #gluster
10:23 jamesc yes
10:24 jiffin jamesc: heal can be done in two ways : 1.) using heal command 2.) performing lookup on that path from client
10:24 jiffin jamesc: i don't why first one didn't work for u
10:25 jiffin *i don't know why first one didn't work for u
10:26 jamesc no me neither
10:26 jiffin jamesc: the /var/log/glusterfs/glustershd.log
10:27 jamesc if I do a recursive find it only sees the sub set of files.
10:27 jiffin maybe it gives you why it didn't work
10:27 jiffin gives you an idea about why it didn't work
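A hedged sketch of the two heal paths jiffin lists, using the volume name from jamesc's commands (the client mount point and file path are hypothetical):

    # 1) server side: trigger/inspect self-heal
    gluster volume heal data        # heal entries already indexed
    gluster volume heal data full   # crawl and heal everything
    gluster volume heal data info   # see what is still pending
    # check /var/log/glusterfs/glustershd.log if it does not converge

    # 2) client side: a named lookup on the missing path
    stat /mnt/data/path/to/deleted/file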
10:27 johnmark joined #gluster
10:28 jiffin jamesc:  any leads there ??
10:29 firemanxbr joined #gluster
10:29 lbarfield joined #gluster
10:38 bharata joined #gluster
10:41 overclk joined #gluster
10:43 jamesc hmm the act of guessing the file, when it recovers it, leaves no log trace whatsoever :(
10:43 Akee joined #gluster
10:45 atrius` silly question... we're looking to replace a single NFS server with a gluster cluster. i imagine if we went with the replicated option we could lose all but one cluster member and still have all storage available. however, how many could we lose if we went with replicated-distributed? do you lose access to the files on the missing nodes but retain access to the rest?
10:45 ashka joined #gluster
10:47 ivan_rossi atrius: depends on quorum too. if you lose quorum the volume will go read-only
10:47 jamesc atrius`: ideally you have 4 bricks for replicated-distributed
10:48 jamesc and 4 servers
10:48 atrius` ivan_rossi: that might be acceptable in this instance :)
10:49 atrius` i'm needing to store a bit over 1TB of stuff that needs to, at the minimum, be available for reads no matter what. however, i'd like to avoid having to consume 4TB of disk for full replication unless that's the only way to achieve the goal.
10:50 ivan_rossi atrius: mainly reads?
10:51 jamesc you might want to consider the nfs client for many reads
10:52 ivan_rossi if so, i would go with replica-3
10:52 jamesc as it has the caching component
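A hedged sketch of the replica-3 layout ivan_rossi suggests, with client-side quorum so the volume stops accepting writes rather than split-braining; hostnames, brick paths and the volume name are all hypothetical:

    gluster volume create shared replica 3 \
        server1:/bricks/shared server2:/bricks/shared server3:/bricks/shared
    gluster volume set shared cluster.quorum-type auto   # client-side quorum
    gluster volume start shared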
10:53 pppp joined #gluster
10:57 Raide joined #gluster
10:59 gildub joined #gluster
11:07 Manikandan joined #gluster
11:07 ashiq joined #gluster
11:13 atrius` ivan_rossi: i believe it is mainly reads (sorry, was getting DC a lot)
11:14 atrius` jamesc: if we go with the NFS client, won't we lose automatic failover and therefore a lot of the benefits of doing this in the first place? wouldn't we then have to unmount and mount another server if the first became unresponsive?
11:15 mhulsman joined #gluster
11:16 jamesc atrius`: I have been doing well with using vrrpd / heartbeat IP ha setup
11:17 jamesc no unmounting involved since gluster keeps all the global inode stuff abstracted
11:17 atrius` jamesc: basically providing a floating IP scenario?
11:17 jamesc ye
11:18 atrius` so you end up with vrrpd/heartbeat -> glusterFS_systems and the clients just mount the float and you're done. correct?
11:18 jamesc correct.
11:18 atrius` nice. clients are completely unaware?
11:18 jamesc yes
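A hedged sketch of the floating-IP NFS setup jamesc describes; keepalived is just one common VRRP implementation (he mentions vrrpd/heartbeat), and all addresses, interface names, volume names and mount points are hypothetical:

    # keepalived.conf fragment on each gluster server: one virtual IP that
    # fails over between the nodes
    vrrp_instance gluster_vip {
        state MASTER          # BACKUP on the other node
        interface eth0
        virtual_router_id 51
        priority 100          # lower on the backup node
        virtual_ipaddress {
            192.0.2.100
        }
    }

    # clients mount gluster's built-in NFS (v3) through the floating IP
    mount -t nfs -o vers=3 192.0.2.100:/data /mnt/data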
11:21 atrius` jamesc: that said, i'd still need (storage_requirement X replicas) if i wanted to have a replica go completely unavailable and clients not notice. is that correct?
11:22 ndarshan joined #gluster
11:24 mhulsman joined #gluster
11:27 spalai joined #gluster
11:28 jiffin atrius`: yes for reads, but if u enable quorum then writes won't work
11:32 panitaliemom joined #gluster
11:41 RameshN joined #gluster
11:41 RameshN :-)
11:43 lpabon joined #gluster
11:49 ivan_rossi jamesc: any advantages in  vrrpd/heartbeat wrt and NFS/ctdb setup (I did the latter)?
11:50 ivan_rossi s/and//
11:51 glusterbot What ivan_rossi meant to say was: jamesc: any advantages in  vrrpd/heartbeat wrt  NFS/ctdb setup (I did the latter)?
11:51 ashiq joined #gluster
11:53 zhangjn joined #gluster
11:53 EinstCrazy joined #gluster
11:53 jamesc I have a replica x 2 on two physical servers, vrrp, and clients using nfs; this is an advantage for many writes as nfs does caching. I can't say for other setups
11:53 jamesc sorry /writes/reads/
11:55 atrius` jamesc: i presume in the setup you describe, or one using the FUSE client but same replica count, you can lose an entire replica and everything is fine?
11:56 jamesc atrius`: Yes, the HA IP will migrate and the files will be served from the other online replica
11:57 atrius` jamesc: is it known how well it handles the situation of "replica (system) alive, but barely responsive"?
11:58 jamesc sorry not seen that issue
11:59 atrius` and i haven't been able to find documentation that really says one way or the other
11:59 atrius` guess i'll have to try and test it to figure out xD
11:59 Manikandan joined #gluster
12:02 jamesc I recommend setting up two VMs installing gluster 3.7 then tinkering.
12:02 jamesc I have just this nfs setup and I deleted a file on one, then I managed to restore it by doing "gluster volume  data start force"
12:02 overclk_ joined #gluster
12:04 ramky joined #gluster
12:06 rafi joined #gluster
12:10 rafi joined #gluster
12:12 ira joined #gluster
12:14 atrius` jamesc: that's what i'm doing now
12:14 nbalacha joined #gluster
12:18 rafi joined #gluster
12:21 B21956 joined #gluster
12:23 sakshi joined #gluster
12:29 dusmant joined #gluster
12:34 B21956 joined #gluster
12:34 spalai joined #gluster
12:38 ctria joined #gluster
12:55 harish_ joined #gluster
12:56 harish_ joined #gluster
12:56 B21956 joined #gluster
13:00 anrao joined #gluster
13:13 spalai joined #gluster
13:15 gem joined #gluster
13:21 overclk joined #gluster
13:34 rafi joined #gluster
13:38 hagarth joined #gluster
13:38 dgandhi joined #gluster
13:44 mlncn joined #gluster
13:53 DV__ joined #gluster
13:55 Akee joined #gluster
13:57 shyam joined #gluster
13:59 Pupeno joined #gluster
13:59 Pupeno joined #gluster
14:05 unclemarc joined #gluster
14:23 ayma joined #gluster
14:28 bennyturns joined #gluster
14:33 skoduri joined #gluster
14:43 KennethDejonghe joined #gluster
14:51 tomatto joined #gluster
14:53 cornfed78 joined #gluster
14:57 skylar joined #gluster
15:02 cholcombe joined #gluster
15:03 kshlm joined #gluster
15:04 skylar joined #gluster
15:05 skylar joined #gluster
15:07 plarsen joined #gluster
15:12 ivan_rossi left #gluster
15:27 P0w3r3d joined #gluster
15:35 rafi joined #gluster
15:43 B21956 joined #gluster
15:44 mlncn joined #gluster
15:45 bowhunter joined #gluster
15:46 jiffin joined #gluster
15:46 Gill_ joined #gluster
15:50 ro_ joined #gluster
16:06 mlncn joined #gluster
16:09 squizzi joined #gluster
16:10 rafi joined #gluster
16:23 overclk joined #gluster
16:24 p0rtal joined #gluster
16:37 muneerse2 joined #gluster
16:40 Pupeno joined #gluster
16:41 plarsen joined #gluster
16:55 rafi1 joined #gluster
16:56 F2Knight joined #gluster
17:00 Pupeno joined #gluster
17:01 kovshenin joined #gluster
17:03 DV joined #gluster
17:04 plarsen joined #gluster
17:17 hgowtham joined #gluster
17:31 rafi joined #gluster
17:46 overclk joined #gluster
17:53 kovshenin joined #gluster
17:56 mikedep333 joined #gluster
17:58 p0rtal joined #gluster
18:03 mikedep333 joined #gluster
18:03 bennyturns joined #gluster
18:04 p0rtal joined #gluster
18:05 jobewan joined #gluster
18:05 p0rtal_ joined #gluster
18:06 mikedep333 left #gluster
18:19 Rapture joined #gluster
18:20 p0rtal joined #gluster
18:27 Rapture joined #gluster
18:31 Pupeno joined #gluster
18:45 plarsen joined #gluster
18:46 mhulsman joined #gluster
18:59 F2Knight joined #gluster
19:00 Pupeno joined #gluster
19:04 bluenemo joined #gluster
19:05 dblack joined #gluster
19:20 shaunm joined #gluster
19:22 squizzi joined #gluster
19:24 mlncn joined #gluster
19:26 spalai joined #gluster
19:27 plarsen joined #gluster
19:31 mhulsman joined #gluster
19:36 tomatto joined #gluster
19:55 mtl1 joined #gluster
19:57 David_Vargese joined #gluster
19:59 mtl1 Hi, I'm trying to find out if it's possible to do this somehow: gluster volume set staff-mount nfs.rpc-auth-allow 10.254.0.0/16
20:01 mtl1 I'm on 3.7, and any time I try to use a * in the IP, I just get a "gluster: No match." error. Can anyone tell me what I'm doing wrong?
20:02 kovshenin joined #gluster
20:13 B21956 joined #gluster
20:14 mhulsman joined #gluster
20:17 ayma hi, I was wondering once I've created my active-active HA-ganesha with gluster and I want to add another gluster volume to the cluster do I just run the regular gluster create cmd and make sure i have the ganesha_enabled volume option set to on?  Or do I need to run more cmds like gluster cluster-enable again after the new gluster volume is created?  Or is there another procedure for adding more volumes to gluster?
20:24 dlambrig_ joined #gluster
20:27 kovshenin joined #gluster
20:36 mtl1 left #gluster
20:38 mhulsman joined #gluster
20:50 Pupeno joined #gluster
21:03 ctria joined #gluster
21:07 squizzi joined #gluster
21:14 mlncn joined #gluster
21:20 bennyturns joined #gluster
22:12 Pupeno joined #gluster
22:19 timotheus1_ joined #gluster
22:20 squizzi joined #gluster
22:35 timotheus1 joined #gluster
22:51 plarsen joined #gluster
23:02 timotheus1 joined #gluster
23:09 p0rtal_ joined #gluster
23:23 bluenemo joined #gluster
23:34 ira joined #gluster
23:38 ira joined #gluster
23:45 armyriad joined #gluster
23:48 Peppard joined #gluster
