
IRC log for #gluster, 2016-12-01


All times shown according to UTC.

Time Nick Message
00:05 dnorman joined #gluster
00:15 k4n0 joined #gluster
00:19 nbalacha joined #gluster
00:24 Caveat4U joined #gluster
00:25 Klas joined #gluster
00:29 primehaxor joined #gluster
00:59 farhorizon joined #gluster
01:00 cholcombe joined #gluster
01:03 shdeng joined #gluster
01:22 derjohn_mobi joined #gluster
01:23 kramdoss_ joined #gluster
01:32 mb_ joined #gluster
01:33 haomaiwang joined #gluster
01:45 dnorman joined #gluster
02:04 phileas joined #gluster
02:10 dnorman joined #gluster
02:12 Gambit15 joined #gluster
02:13 aj__ joined #gluster
02:14 haomaiwang joined #gluster
02:17 jvandewege joined #gluster
02:34 panina joined #gluster
02:41 prth joined #gluster
02:48 ilbot3 joined #gluster
02:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:57 Caveat4U joined #gluster
03:00 plarsen joined #gluster
03:04 Lee1092 joined #gluster
03:14 haomaiwang joined #gluster
03:32 sbulage joined #gluster
03:44 magrawal joined #gluster
03:48 atinm joined #gluster
03:48 panina joined #gluster
03:49 Prasad joined #gluster
04:04 k4n0 joined #gluster
04:09 Saravanakmr joined #gluster
04:12 itisravi joined #gluster
04:14 PsionTheory joined #gluster
04:14 haomaiwang joined #gluster
04:22 jiffin joined #gluster
04:22 garamelek joined #gluster
04:23 magrawal joined #gluster
04:23 garamelek hi
04:23 glusterbot garamelek: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
04:23 magrawal joined #gluster
04:25 Shu6h3ndu joined #gluster
04:26 sanoj joined #gluster
04:30 k4n0 joined #gluster
04:32 Saravanakmr joined #gluster
04:42 aravindavk joined #gluster
04:42 RameshN joined #gluster
04:43 sanoj joined #gluster
04:51 dnorman joined #gluster
04:58 k4n0 joined #gluster
04:58 ndarshan joined #gluster
05:02 jkroon joined #gluster
05:03 buvanesh_kumar joined #gluster
05:14 nbalacha joined #gluster
05:14 haomaiwang joined #gluster
05:17 ndarshan joined #gluster
05:18 nishanth joined #gluster
05:19 apandey joined #gluster
05:19 rafi joined #gluster
05:23 Shu6h3ndu joined #gluster
05:32 ashiq joined #gluster
05:34 rafi joined #gluster
05:35 apandey joined #gluster
05:38 garamelek кто нибудь на русском говорит ? (does anyone here speak Russian?)
05:40 jiffin garamelek: can u please repeat ur question in english?
05:44 riyas joined #gluster
05:51 garamelek jiffin sorry, i'm trying to find russian-speaking guys ))
05:52 garamelek i am new to glusterfs, but it looks good
05:53 ankitraj joined #gluster
05:55 prth joined #gluster
06:00 kotreshhr joined #gluster
06:04 k4n0 joined #gluster
06:05 hgowtham joined #gluster
06:08 skoduri joined #gluster
06:11 ShwethaHP joined #gluster
06:12 ppai joined #gluster
06:13 susant joined #gluster
06:13 Muthu joined #gluster
06:14 kdhananjay joined #gluster
06:14 haomaiwang joined #gluster
06:15 buvanesh_kumar joined #gluster
06:27 sbulage joined #gluster
06:27 ShwethaHP joined #gluster
06:29 Utoxin joined #gluster
06:29 mb_ joined #gluster
06:31 Utoxin A couple questions that I haven't been able to find clear answers for yet while googling: Is it a problem to have 3.5.x clients connecting to a 3.8.x server? And is it safe to do a rolling upgrade between those versions if I have the volume set up as distributed replication in a 2x3 setup?
06:32 Utoxin (I understand I need to make sure nothing needs healed, etc.)
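A minimal sketch of the pre-upgrade check Utoxin alludes to, assuming a hypothetical volume named myvol; both are standard gluster CLI commands:

    # All peers should be connected before starting a rolling upgrade
    gluster peer status
    # Pending-heal counts should be zero on every brick
    gluster volume heal myvol info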
06:33 k4n0 joined #gluster
06:36 apandey joined #gluster
06:43 dnorman joined #gluster
06:52 Karan joined #gluster
06:56 devyani7 joined #gluster
07:07 hackman joined #gluster
07:14 haomaiwang joined #gluster
07:15 Caveat4U joined #gluster
07:20 garamelek any1 have experience with windows clients and unix glusterfs?
07:24 bhakti joined #gluster
07:28 msvbhat joined #gluster
07:28 masuberu joined #gluster
07:28 hchiramm_ joined #gluster
07:40 MikeLupe joined #gluster
07:45 buvanesh_kumar joined #gluster
07:55 ppai joined #gluster
07:56 [diablo] joined #gluster
08:03 mhulsman joined #gluster
08:05 prth joined #gluster
08:08 jri joined #gluster
08:14 haomaiwang joined #gluster
08:19 ppai joined #gluster
08:26 amye joined #gluster
08:27 Ramereth joined #gluster
08:27 jerrcs_ joined #gluster
08:30 ws2k3 joined #gluster
08:31 fsimonce joined #gluster
08:33 sbulage joined #gluster
08:36 skoduri joined #gluster
08:36 aj__ joined #gluster
08:37 sanoj joined #gluster
08:38 shdeng joined #gluster
08:47 prth joined #gluster
08:53 sanoj joined #gluster
08:54 ahino joined #gluster
08:54 flying joined #gluster
08:58 skoduri joined #gluster
08:59 ShwethaHP joined #gluster
09:11 prth joined #gluster
09:12 msvbhat joined #gluster
09:14 haomaiwang joined #gluster
09:16 skoduri joined #gluster
09:18 mahendratech joined #gluster
09:22 Slashman joined #gluster
09:23 ppai joined #gluster
09:26 jkroon joined #gluster
09:31 panina joined #gluster
09:35 nishanth joined #gluster
09:40 rastar joined #gluster
09:42 rastar joined #gluster
09:43 hxn joined #gluster
09:43 hxn hi
09:43 glusterbot hxn: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
09:46 hxn I just installed and configured gluster and I am testing it to see if it's the right solution for us. It happens that I have created some test files in a gluster mount point (1000 1MB files) but after 12 hours the files still haven't been copied to the bricks
09:48 Debloper joined #gluster
09:48 hxn could gluster just be waiting for me to do something?
09:49 virusuy joined #gluster
09:51 aj__ joined #gluster
09:55 hgowtham joined #gluster
09:59 jiffin1 joined #gluster
10:00 kshlm hxn, Really? Writes to a gluster mount point should only return after it's actually written to the bricks.
10:01 kshlm Could you describe your setup? It will help identify if something is incorrect.
10:03 hxn kshlm: I've got two storage servers and one frontend server where the gluster volume is mounted
10:04 hxn I know that both storage servers are running and the gluster volume is correctly mounted in the frontend
10:04 LiftedKilt joined #gluster
10:04 rofl____ joined #gluster
10:04 ShwethaHP joined #gluster
10:05 hxn kshlm: Ok, it just happened :) Reading the logs it seems that one of the bricks was offline until a few minutes ago
10:05 ShwethaHP joined #gluster
10:05 hxn they just "synced" now if that's the correct word for this process
10:07 kshlm hxn, "healed" is the term used by gluster. Missing data on a brick is healed from other bricks in the replica set.
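For reference, the offline brick hxn hit and the subsequent heal can be observed with standard commands; a sketch assuming a hypothetical volume named myvol:

    # The Online column shows whether each brick process is up
    gluster volume status myvol
    # Counts entries still waiting to be healed onto each brick
    gluster volume heal myvol statistics heal-count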
10:09 ppai joined #gluster
10:11 Shu6h3ndu joined #gluster
10:11 hxn kshlm: Thank you for your help!
10:14 haomaiwang joined #gluster
10:15 Shu6h3ndu joined #gluster
10:19 ndarshan joined #gluster
10:19 hxn join #ansible
10:19 hxn sorry, wrong window :S
10:27 jiffin1 joined #gluster
10:29 mahendratech hi, can anyone guide me on how to enable object versioning on glusterfs? e.g. a file keeps copies of up to 4-5 modifications
10:29 mahendratech so we can recover accidentally deleted files
10:32 Bardack_ joined #gluster
10:33 suliba joined #gluster
10:34 lanning joined #gluster
10:35 cloaked1 joined #gluster
10:37 delhage_ joined #gluster
10:37 amye joined #gluster
10:38 d0nn1e_ joined #gluster
10:38 shortdudey123 joined #gluster
10:41 riyas joined #gluster
10:43 kdhananjay joined #gluster
10:43 rofl____ joined #gluster
10:43 rastar joined #gluster
10:43 bhakti joined #gluster
10:43 RameshN joined #gluster
10:43 lalatenduM joined #gluster
10:43 pkalever joined #gluster
10:43 ebbex_ joined #gluster
10:43 kdhananjay joined #gluster
10:46 Karan joined #gluster
10:48 nishanth joined #gluster
10:48 sanoj joined #gluster
10:56 msvbhat joined #gluster
11:02 mahendratech joined #gluster
11:04 sanoj joined #gluster
11:05 jri joined #gluster
11:12 msvbhat joined #gluster
11:13 atinm joined #gluster
11:14 haomaiwang joined #gluster
11:16 DV__ joined #gluster
11:19 haomaiwang joined #gluster
11:25 mhulsman joined #gluster
11:31 mahendratech joined #gluster
11:31 mahendratech joined #gluster
11:34 primehaxor joined #gluster
11:36 mahendratech joined #gluster
11:36 mahendratech joined #gluster
11:39 mahendra_ joined #gluster
11:40 mahendratech joined #gluster
11:41 mhulsman joined #gluster
11:43 hackman joined #gluster
11:51 prth joined #gluster
11:54 jri joined #gluster
11:59 Gambit15 Hey guys, can anyone advise on how to convert rep 2 arbiter 1, to rep 2 arbiter 2?
12:01 jiffin itisravi: ^^
12:02 itisravi Gambit15: there is no arbiter 2 conversion. You can convert a replica 2 volume to an arbiter volume using the add-brick command.
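A sketch of the add-brick conversion itisravi describes, with a hypothetical volume name and brick path; this is the documented syntax for adding one arbiter brick per replica set:

    # For a plain replica 2 volume, add a single arbiter brick
    gluster volume add-brick myvol replica 3 arbiter 1 arb-host:/bricks/myvol-arb
    # A 2x2 distributed-replicate volume would need two arbiter
    # bricks listed in the same command, one per replica set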
12:03 kotreshhr left #gluster
12:05 Gambit15 Is it possible to add a 2nd arbiter to an existing r2a1 setup?
12:06 Gambit15 I'm trying to avoid the volume going R/O when a server goes down
12:07 itisravi Gambit15: what is your current volume config? (gluster volume info)?
12:07 Gambit15 2 x (2 + 1) = 6
12:08 Gambit15 2 pairs of servers, and the first of each pair is the arbiter for the neighbour pair
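A hypothetical reconstruction of the 2 x (2 + 1) layout Gambit15 describes, where servers 1 and 3 each host one data brick plus the arbiter for the neighbouring pair (every third brick in the list is the arbiter):

    gluster volume create myvol replica 3 arbiter 1 \
        srv1:/bricks/data srv2:/bricks/data srv3:/bricks/arb \
        srv3:/bricks/data srv4:/bricks/data srv1:/bricks/arb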
12:08 haomaiwang joined #gluster
12:09 Gnomethrower joined #gluster
12:10 itisravi Gambit15: As long as one server doesn't host more than one brick of the same replica pair, you should not get R/O if that node goes down.
12:12 itisravi And it looks like that is how your setup is, if I understand your brick placement correctly.
12:14 haomaiwang joined #gluster
12:16 cloph and a hypothetical second arbiter wouldn't help if your two data-bricks go down. You'll have meta-data twice, but of course no way the volume could stay up
12:17 nbalacha joined #gluster
12:17 msvbhat joined #gluster
12:24 Gambit15 Each server hosts only one brick (+ 1 & 3 which host arbiters). I'm aware of the issue of 1 & 3, or 2 & 4 failing, however that's not easily resolved without more servers (on order)
12:26 Gambit15 My concern though, is that in the case of one of the bricks failing, quorum would no longer be >50%, which puts the volume in R/O by default
12:31 18VAAMLWJ joined #gluster
12:33 haomaiwang joined #gluster
12:33 jiffin1 joined #gluster
12:35 haomaiwang joined #gluster
12:38 itisravi Gambit15:  For a given replica pair, if one brick fails, we still have 2 up  => quorum is still >50% right?
12:39 itisravi I think you need to check your mount logs for possible disconnects to the bricks if you are getting EROFS.
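For context, the quorum behaviour under discussion is governed by standard volume options; a sketch with a hypothetical volume name:

    # Client-side quorum: with 'auto', writes are allowed while a
    # majority of each replica set (arbiter included) is reachable
    gluster volume set myvol cluster.quorum-type auto
    # Server-side quorum acts on glusterd peers rather than bricks
    gluster volume set myvol cluster.server-quorum-type server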
12:45 cloph "Each server hosts only one brick (+ 1 & 3 which host arbiters)." this is ambiguous. does the stuff in parenthesis mean that some hosts have both data brick as well as arbiter? Otherwise why that addition? either it is only hosting only a single brick or not...
12:48 jiffin1 joined #gluster
12:51 TvL2386 joined #gluster
12:56 B21956 joined #gluster
13:01 johnmilton joined #gluster
13:02 itisravi cloph: This is what Gambit15 prolly means: https://paste.fedoraproject.org/494661/
13:02 glusterbot Title: #494661 • Fedora Project Pastebin (at paste.fedoraproject.org)
13:03 panina joined #gluster
13:04 * itisravi has to go now
13:04 mhulsman joined #gluster
13:13 Sebbo2 joined #gluster
13:14 haomaiwang joined #gluster
13:22 sanoj joined #gluster
13:26 ira_ joined #gluster
13:32 prth joined #gluster
13:32 buvanesh_kumar joined #gluster
13:35 cloph joined #gluster
13:41 kdhananjay joined #gluster
13:44 unclemarc joined #gluster
13:55 Muthu joined #gluster
13:59 shyam joined #gluster
14:05 jkroon joined #gluster
14:10 unclemarc joined #gluster
14:13 nishanth joined #gluster
14:14 haomaiwang joined #gluster
14:19 shyam joined #gluster
14:20 skylar joined #gluster
14:29 theron joined #gluster
14:34 f0rpaxe joined #gluster
14:38 social joined #gluster
14:38 buvanesh_kumar joined #gluster
14:38 m0zes joined #gluster
14:40 f0rpaxe joined #gluster
14:46 SLIMEEIGHT joined #gluster
14:47 shyam joined #gluster
14:47 buvanesh_kumar_ joined #gluster
14:52 Wizek joined #gluster
14:52 jri joined #gluster
14:58 jri joined #gluster
15:00 mhulsman Is there any script available that gives the placement of bricks in a distributed/replicated environment based on the <volume>/tcp-fuse.vol file?
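No such script ships with gluster as far as I know, but the fuse vol file pairs each brick's host and path inside its protocol/client sections, so a short awk sketch can list the placement (volume name and path are hypothetical):

    awk '/option remote-host/      {host=$3}
         /option remote-subvolume/ {brick=$3}
         /^end-volume/ && host     {print host ":" brick; host=brick=""}' \
        /var/lib/glusterd/vols/myvol/myvol.tcp-fuse.vol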
15:02 jkroon joined #gluster
15:05 dnorman joined #gluster
15:13 annettec joined #gluster
15:14 haomaiwang joined #gluster
15:19 Gambit15 itisravi, cloph: Sorry, got called out. Yes, the layout is as Ravi noted. I know an arbiter is still technically a brick, but not a "data" brick as such. And yes, you're correct WRT quorum - I was confusing myself (coffeeeee). Basically, a couple of times a week, random peers seem to lose communications for a few minutes & it's causing a nuisance with my VMs which run on each peer. I'm trying to narrow down the exact cause, however between all of the inte
15:19 nishanth joined #gluster
15:19 Gambit15 Many thanks for the replies though!
15:22 squizzi_ joined #gluster
15:24 Shu6h3ndu joined #gluster
15:28 plarsen joined #gluster
15:29 susant joined #gluster
15:32 Gambit15 joined #gluster
15:47 Caveat4U joined #gluster
15:47 ivan_rossi joined #gluster
16:00 theron joined #gluster
16:01 Lee1092 joined #gluster
16:04 jri joined #gluster
16:06 wushudoin joined #gluster
16:07 wushudoin joined #gluster
16:09 farhorizon joined #gluster
16:13 jiffin joined #gluster
16:16 hxn joined #gluster
16:17 hxn Hello. I would like to know if anyone has tried to back up gluster snapshots using bacula or similar.
16:17 atinm joined #gluster
16:18 hxn The process would be: Snapshots are created from time to time and bacula backs up the snapshot instead of the files in the mount point. This would enable us to avoid problems with files changing while the back up job is running.
16:18 hxn Has anyone tried this? Any advice?
16:21 cloph I never used gluster snapshots before, but I'd rather take a snapshot, mount it, and sync the data from that r/o mount, rather than trying to back up the snapshot files themselves.
16:21 snehring I'm planning on doing something similar with a future georeplication slave
16:21 cloph Not sure how they cascade and what would happen when you delete an intermediate snapshot (whether that's possible at all)
16:22 snehring on the surface it sounds like a reasonable plan
16:22 hxn The thing is that doing snapshots would let us avoid problems with backups running while heal jobs are in progress
16:22 cloph geo-replication is async, so should be fine without snapshots
16:23 cloph geo-replication relies on the gluster gfids, so using an external tool to geo-replicate will conflict.
16:24 snehring in my setup we want snapshots of the georeplicated volume on the slave for reasons
16:24 snehring like 'oh we just realized our entire directory got crypto lockered, but it's been this way for a week' reasons
16:26 hxn so what would be the best way to go about this? can we lock a gluster volume while the backup is happening in a way that it cannot be written to? apologies if my question seems dumb, I haven't been using gluster for long
16:27 snehring you could stop the volume, but that would make it completely inaccessible
16:27 cloph for that using the snapshot feature sounds like a reasonable way to go.
16:27 snehring doing a snapshot, backing up that snapshot, sounds good
16:28 cloph you can replicate that by other means than using gluster's geo-replication after all.
16:28 hxn Great. Thank you for your advice :)
16:28 cloph Or otherwise: create the snapshots on the geo-replicated side
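A sketch of the snapshot-then-backup flow the channel converges on; gluster snapshots require thinly-provisioned LVM bricks, and the names here are hypothetical (shown without the timestamp suffix gluster appends to snapshot names by default):

    # Take a point-in-time snapshot of the volume
    gluster snapshot create nightly myvol
    # Activate it so that it can be mounted
    gluster snapshot activate nightly
    # Mount the snapshot read-only and point bacula (or rsync) at it
    mount -t glusterfs server1:/snaps/nightly/myvol /mnt/snap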
16:30 riyas joined #gluster
16:31 Gambit15 I geo-rep our volumes to a backup server running ZFS, and then use ZFS snapshots to maintain a history of the volume. It's far more efficient & sturdy than any other form of incremental backup, and you get all of the data integrity benefits that come with ZFS.
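A sketch of the pattern Gambit15 describes, assuming a hypothetical slave host backup1 whose slave volume sits on a ZFS dataset named tank/gluster:

    # One-time setup, run on the master cluster
    gluster volume geo-replication myvol backup1::backupvol create push-pem
    gluster volume geo-replication myvol backup1::backupvol start
    # On the backup server, e.g. from a twice-daily cron-driven script
    zfs snapshot tank/gluster@$(date +%Y%m%d-%H%M)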
16:31 snehring Gambit15, that is exactly what I have planned :D
16:31 snehring good to hear it's working well for you
16:32 snehring have you had to do any restores from backup yet?
16:34 Gambit15 It's worked out far better in practice than I ever dreamed. Our DCs are currently going through a big overhaul & power issues have been common. Despite this, I've not once had a problem with integrity or corruption & restoring files takes minutes
16:36 snehring cool
16:37 Gambit15 I've scripted an interface where our users can login to the system, select the date & time of the desired snapshot (taken twice daily) & then get FTP access to a R/O mounted snapshot. Super easy
16:38 snehring I haven't quite figured out how we're going to do restores yet
16:38 snehring The target is a couple hundred miles away, but we've got a 10G (soon to be 100G) link between there and here
16:38 Gambit15 Mounting a snapshot takes seconds, the only wait a user has is whilst they transfer the required files to their chosen destination.
16:39 snehring might set up samba with the vfs2 shadow plugin to access the snapshots via the whole 'previous versions' thing on windows
16:39 snehring your ftp solution sounds pretty slick though
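The Samba module snehring means is shadow_copy2; a hedged smb.conf sketch exposing ZFS snapshots as Windows "Previous Versions" (share name, path and snapshot naming scheme are assumptions):

    [backups]
        path = /tank/gluster/myvol
        vfs objects = shadow_copy2
        ; ZFS exposes snapshots under a hidden .zfs/snapshot directory
        shadow:snapdir = .zfs/snapshot
        ; must match the snapshot naming scheme, e.g. 20161201-0600
        shadow:format = %Y%m%d-%H%M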
16:40 Gambit15 I've read stuff on people having success with that, although not used it personally
16:40 Gambit15 I'm only backing up servers & VMs
16:40 haomaiwang joined #gluster
16:40 snehring sure
16:42 * Gambit15 *off to lunch*
16:50 hackman joined #gluster
16:50 jiffin joined #gluster
16:56 shyam joined #gluster
17:05 panina joined #gluster
17:06 rwheeler joined #gluster
17:08 skylar joined #gluster
17:14 haomaiwang joined #gluster
17:25 Karan joined #gluster
17:27 shyam joined #gluster
17:35 rafi joined #gluster
17:41 primehaxor joined #gluster
17:48 vbellur joined #gluster
17:51 armin joined #gluster
17:53 ivan_rossi left #gluster
18:01 dnorman joined #gluster
18:01 dnorman joined #gluster
18:07 theron_ joined #gluster
18:08 prth joined #gluster
18:20 rwheeler joined #gluster
18:24 mhulsman joined #gluster
18:40 prth joined #gluster
18:42 mhulsman joined #gluster
18:45 squizzi_ joined #gluster
18:48 rastar joined #gluster
18:50 mhulsman joined #gluster
18:52 atrius joined #gluster
18:55 hchiramm joined #gluster
18:58 post-factum joined #gluster
19:00 farhorizon joined #gluster
19:02 Philambdo joined #gluster
19:02 ahino joined #gluster
19:06 theron joined #gluster
19:12 JoeJulian garamelek: You asked me what I recommend for connecting windows machines with gluster, nfs or smb. I prefer nfs.
19:13 snehring Really? In my experience the windows nfs client doesn't seem to obey nfs4 acls
19:14 JoeJulian I'm more concerned with performance than acls.
19:15 snehring depends on your use case I guess
19:15 snehring our users have paranoid fantasies where everyone is out to steal their data
19:15 JoeJulian always does. :)
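For context on JoeJulian's preference: with the Windows "Client for NFS" feature installed, a gluster volume exported over gluster's NFS server can be mapped from cmd.exe; a sketch with hypothetical names:

    rem Gluster NFS exports the volume at /myvol
    mount -o nolock \\server1\myvol G: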
19:17 dnorman joined #gluster
19:18 mhulsman joined #gluster
19:19 vbellur joined #gluster
19:20 vbellur joined #gluster
19:21 vbellur joined #gluster
19:21 vbellur joined #gluster
19:23 vbellur joined #gluster
19:23 vbellur joined #gluster
19:27 virusuy joined #gluster
19:27 virusuy joined #gluster
19:39 hchiramm joined #gluster
19:46 armin joined #gluster
19:48 armin joined #gluster
19:52 farhoriz_ joined #gluster
19:53 mhulsman joined #gluster
20:08 pcdummy joined #gluster
20:09 dnorman joined #gluster
20:16 johnmilton joined #gluster
20:29 panina joined #gluster
20:30 abyss^ JoeJulian: one of the gluster versions introduced an improvement so that ls -l on a mounted gluster fs doesn't take so long - do you remember which version it was and what that improvement was?
20:34 armin joined #gluster
20:38 hackman joined #gluster
20:40 raghu joined #gluster
20:40 farhorizon joined #gluster
20:41 bwerthmann joined #gluster
20:46 dnorman joined #gluster
20:48 armin joined #gluster
20:49 BuBU291 joined #gluster
20:51 _BuBU29 joined #gluster
20:52 snehring JoeJulian, how poor was the performance you were seeing with smb vs nfs?
20:55 BuBU29 joined #gluster
20:59 rwheeler joined #gluster
21:13 mhulsman joined #gluster
21:33 farhorizon joined #gluster
21:38 shyam joined #gluster
21:40 vbellur joined #gluster
21:41 vbellur1 joined #gluster
21:41 vbellur1 joined #gluster
21:42 vbellur1 joined #gluster
21:43 Jacob843 joined #gluster
21:43 vbellur1 joined #gluster
21:44 vbellur1 joined #gluster
21:45 vbellur joined #gluster
21:46 bluenemo joined #gluster
21:47 vbellur joined #gluster
21:48 vbellur joined #gluster
21:50 vbellur joined #gluster
21:53 farhorizon joined #gluster
21:53 farhorizon joined #gluster
21:57 vbellur joined #gluster
22:14 GPage joined #gluster
22:14 haomaiwang joined #gluster
22:15 GPage Hi we're running a 9 node cluster in a mission-critical environment and we're currently on 3.7.3. We've had no problems with that version but we want to upgrade to 3.7.1. Is there a way to do that with 0 downtime?
22:15 GPage I.E rolling upgrade of the nodes?
22:15 GPage 3.7.17
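For the archive, the documented rolling-upgrade pattern within a minor series (here 3.7.3 to 3.7.17) is one server at a time; package manager and service names below are assumptions for an RPM/systemd system:

    # Repeat per server, only after cluster-wide heals are finished
    gluster volume heal myvol info      # must show zero entries
    systemctl stop glusterd
    pkill glusterfsd; pkill glusterfs   # stop remaining brick/aux processes
    yum -y update glusterfs-server
    systemctl start glusterd
    # Wait for self-heal to complete before moving to the next server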
22:32 arc0 joined #gluster
22:32 vbellur joined #gluster
22:34 Caveat4U_ joined #gluster
22:42 Caveat4U joined #gluster
22:55 annettec joined #gluster
22:57 Caveat4U joined #gluster
22:59 cloaked1 left #gluster
23:12 masber joined #gluster
23:14 hchiramm_ joined #gluster
23:17 aj__ joined #gluster
23:18 social joined #gluster
23:29 caitnop joined #gluster
23:41 cliluw joined #gluster
23:49 cliluw joined #gluster
23:52 farhorizon joined #gluster
