IRC log for #gluster, 2015-10-19


All times are shown in UTC.

Time Nick Message
00:01 haomaiwang joined #gluster
00:44 theron joined #gluster
00:47 vimal joined #gluster
00:48 halloo joined #gluster
00:53 suliba joined #gluster
01:07 EinstCrazy joined #gluster
01:42 Lee1092 joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:48 haomaiwa_ joined #gluster
01:51 harish joined #gluster
01:56 theron joined #gluster
01:57 nangthang joined #gluster
02:12 harish joined #gluster
02:22 haomaiwa_ joined #gluster
02:39 haomaiwa_ joined #gluster
02:44 maveric_amitc_ joined #gluster
02:47 suliba joined #gluster
02:59 halloo joined #gluster
02:59 suliba joined #gluster
03:07 haomaiwa_ joined #gluster
03:13 sripathi joined #gluster
03:17 shaunm joined #gluster
03:23 sripathi joined #gluster
03:25 [7] joined #gluster
03:29 purpleidea joined #gluster
03:37 nbalacha joined #gluster
03:38 stickyboy joined #gluster
03:48 ramteid joined #gluster
03:52 Humble joined #gluster
03:56 itisravi joined #gluster
04:00 sripathi1 joined #gluster
04:04 haomaiwang joined #gluster
04:10 gem joined #gluster
04:14 sakshi joined #gluster
04:19 kdhananjay joined #gluster
04:19 kdhananjay joined #gluster
04:27 jiffin joined #gluster
04:28 halloo joined #gluster
04:42 rafi1 joined #gluster
04:45 deepakcs joined #gluster
04:46 skoduri joined #gluster
04:48 free_amitc_ joined #gluster
04:57 ppai joined #gluster
05:01 haomaiwang joined #gluster
05:02 shubhendu joined #gluster
05:03 haomaiwang joined #gluster
05:07 ndarshan joined #gluster
05:24 overclk joined #gluster
05:24 aravindavk joined #gluster
05:28 poornimag joined #gluster
05:30 hagarth joined #gluster
05:30 ashiq joined #gluster
05:32 kotreshhr joined #gluster
05:34 yazhini joined #gluster
05:35 kshlm joined #gluster
05:36 ron-slc joined #gluster
05:38 Bhaskarakiran joined #gluster
05:42 Manikandan joined #gluster
05:43 ppai joined #gluster
05:44 atalur joined #gluster
05:56 Bhaskarakiran joined #gluster
05:58 Humble joined #gluster
06:07 jwd joined #gluster
06:07 ramky joined #gluster
06:09 skoduri joined #gluster
06:11 hgowtham joined #gluster
06:12 mhulsman joined #gluster
06:24 vmallika joined #gluster
06:25 jtux joined #gluster
06:26 nangthang joined #gluster
06:27 GB21 joined #gluster
06:31 hchiramm joined #gluster
06:31 zhangjn joined #gluster
06:37 kotreshhr joined #gluster
06:39 free_amitc_ joined #gluster
06:45 haomaiwa_ joined #gluster
06:50 purpleidea joined #gluster
06:50 purpleidea joined #gluster
07:01 deepakcs joined #gluster
07:01 haomaiwa_ joined #gluster
07:03 LebedevRI joined #gluster
07:10 atrius joined #gluster
07:12 nangthang joined #gluster
07:14 [Enrico] joined #gluster
07:15 nangthang joined #gluster
07:17 yawkat joined #gluster
07:19 spalai joined #gluster
07:28 gildub joined #gluster
07:29 xavih joined #gluster
07:30 malevolent joined #gluster
07:32 fsimonce joined #gluster
07:33 aravindavk joined #gluster
07:33 itisravi joined #gluster
07:42 xavih joined #gluster
07:42 malevolent joined #gluster
07:47 xavih joined #gluster
07:47 malevolent joined #gluster
07:49 spalai joined #gluster
07:51 kotreshhr joined #gluster
07:53 xavih joined #gluster
07:53 malevolent joined #gluster
07:54 armyriad joined #gluster
07:56 ctria joined #gluster
08:03 gem joined #gluster
08:04 ZuLu[UM0215] joined #gluster
08:07 sakshi joined #gluster
08:11 haomaiwang joined #gluster
08:12 RayTrace_ joined #gluster
08:12 arcolife joined #gluster
08:13 RayTrace_ joined #gluster
08:14 RayTrace_ joined #gluster
08:17 maveric_amitc_ joined #gluster
08:20 haomai___ joined #gluster
08:21 Slashman joined #gluster
08:21 GB21 joined #gluster
08:25 mlhamburg joined #gluster
08:26 dusmant joined #gluster
08:26 xavih joined #gluster
08:27 malevolent joined #gluster
08:34 deniszh joined #gluster
08:42 ctrianta joined #gluster
08:45 jwaibel joined #gluster
08:55 frozengeek joined #gluster
08:56 mbukatov joined #gluster
08:56 maveric_amitc_ joined #gluster
08:57 maveric_amitc_ joined #gluster
09:00 arcolife joined #gluster
09:01 hagarth joined #gluster
09:02 ctrianta joined #gluster
09:08 malevolent joined #gluster
09:11 haomaiwa_ joined #gluster
09:14 tru_tru joined #gluster
09:17 Norky joined #gluster
09:22 Marqin joined #gluster
09:23 Marqin Is it possible to shrink a replica set? I need to remove a brick, but gluster doesn't allow me, giving the error "Remove brick incorrect brick count of 1 for replica 2"
09:24 Marqin in Gluster 3.1
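
A hedged sketch of the shrink Marqin is after: on releases from roughly 3.3 onward the replica count can be lowered as part of remove-brick, so an old 3.1 install may simply not support it. The volume name vol0 and the brick path are hypothetical.

    # drop one copy of each replica pair and lower the replica count in the same command
    gluster volume remove-brick vol0 replica 1 server2:/export/brick1 force
    # confirm the new layout afterwards
    gluster volume info vol0
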
09:27 RayTrace_ joined #gluster
09:31 sakshi joined #gluster
09:33 Bhaskarakiran_ joined #gluster
09:34 haomaiwa_ joined #gluster
09:34 Bhaskarakiran joined #gluster
09:35 R0ok__ joined #gluster
09:37 Sunghost joined #gluster
09:39 stickyboy joined #gluster
09:47 hchiramm joined #gluster
09:47 raghu joined #gluster
09:48 Humble joined #gluster
09:53 Sunghost Hi, I have a problem with my distributed 2-brick glusterfs and I think it's a split-brain, but I need help and info.
09:55 Sunghost The folder mounted via NFS shows no files, but on the brick itself all the files exist
09:56 Sunghost Split-brain? How can I solve this? Copy straight from the brick to the mountpoint?
09:56 mhulsman joined #gluster
09:59 Marqin huh
10:00 Marqin i've edited the .vol file, restarted the glusterfs volume and glusterfs-server
10:00 Marqin but gluster volume info still shows the old info
10:01 77CAA1S1S joined #gluster
10:02 Trefex joined #gluster
10:04 firemanxbr joined #gluster
10:10 Bhaskarakiran joined #gluster
10:10 RayTrace_ joined #gluster
10:15 frozengeek joined #gluster
10:18 Sunghost sorry Marqin, I have no idea; perhaps a wrong character in the file, or a wrong parameter? I would check the changes again. What did you change?
10:18 Sunghost Any help with my problem on a distributed glusterfs volume with no files on the mountpoint but files in the folder of one brick?
10:19 hagarth Sunghost: have you checked the gluster nfs server logs?
10:19 vmallika joined #gluster
10:19 hagarth Marqin: what did you edit in the .vol file?
10:20 Sunghost I think it's not an NFS failure but something like split-brain. I lost my raid because of one bad disk with lots of bad blocks; I had to run xfs_repair and it deleted more than 10TB of data
10:20 Sunghost when I looked into the mountpoint the files were gone, but on the brick in the "storefolder" they are there, so I think the "brick" files are missing
10:21 hagarth Sunghost: is your failed raid operational now?
10:22 Sunghost yes all is fine, one missing disk out of raid6 but thats not the problem
10:23 harish_ joined #gluster
10:25 hagarth Sunghost: do you see files in all the bricks that form a replicated set?
10:27 Sunghost not replicated -> distributed
10:28 Sunghost but I can see e.g. a 3GB iso file on the brick
10:28 Sunghost but not on the volume mountpoint
10:28 Sunghost so I think the files <- I don't know the name for them -> are missing for glusterfs
10:30 hagarth Sunghost: which version of gluster and what is the backend filesystem for the bricks?
10:31 bluenemo joined #gluster
10:31 mbukatov joined #gluster
10:32 Saravana_ joined #gluster
10:33 maveric_amitc_ joined #gluster
10:34 Sunghost the system is wheezy for brick1 and, because of the problems, stretch on brick2; both with an XFS filesystem and gluster 3.6.1 as a distributed volume
10:34 Sunghost so 2 bricks, each with mdadm raid6
10:36 Sunghost so I have to start the server
10:37 Sunghost I'm talking about the files in the .glusterfs directory
10:37 Sunghost I think some are missing, so I can't see them on the volume mount via NFS - perhaps it's like a split-brain?!
10:38 Sunghost I tried to copy from the brick /dir1/sub1 -> the mounted volume /dir2/sub2 - but the files are missing; I think gluster takes the filenames and recognised that
10:38 Sunghost sorry for my limited technical understanding of glusterfs
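
One way to check what Sunghost is describing: a file that exists under the brick's data path but is invisible on the mount is often missing its gluster extended attributes or its hardlink under .glusterfs. A small sketch, assuming a hypothetical brick path of /export/brick1:

    # inspect the gluster xattrs of a file directly on the brick
    getfattr -m . -d -e hex /export/brick1/dir1/sub1/file.iso
    # a healthy regular file carries trusted.gfid and has a second hardlink
    # under /export/brick1/.glusterfs/, so its link count should be at least 2
    stat -c '%h links: %n' /export/brick1/dir1/sub1/file.iso
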
10:41 hagarth Sunghost: split-brains in distributed volumes are very unlikely.
10:41 hagarth Sunghost: does gluster volume status <volname> list all processes as being online?
10:41 Sunghost ok - but is this a split-brain situation
10:42 hagarth Sunghost: Doesn't seem to be the case
10:42 Sunghost yes - all seems ok and running
10:42 Sunghost would it be possible to move the files from the brick to another server or disk which is not in the volume
10:42 Sunghost and after that move them back?
10:43 hagarth Sunghost: you should be able to .. how much data do you have?
10:43 Sunghost would a fix-layout or something fix this?
10:45 Sunghost wait a moment i see that brick1 is offline
10:45 Saravana_ joined #gluster
10:45 Sunghost Staging failed on clusternode02. Error: Volume profile op get failed
10:45 Sunghost actual message, after server restart
10:46 Sunghost 02 is the problem one
10:50 hagarth Sunghost: try to get brick1 online. that should help to resolve the problem.
10:50 hagarth @channelstats
10:50 glusterbot hagarth: On #gluster there have been 422190 messages, containing 15830733 characters, 2606146 words, 9365 smileys, and 1337 frowns; 1825 of those messages were ACTIONs.  There have been 196994 joins, 4832 parts, 192506 quits, 29 kicks, 2460 mode changes, and 8 topic changes.  There are currently 287 users and the channel has peaked at 299 users.
10:52 Sunghost ok, rebooted both systems but the error message still appears
10:52 rjoseph joined #gluster
10:53 hagarth Sunghost: is the brick status still being seen as offline?
10:54 Sunghost status shows only brick2
10:54 Sunghost yes brick one is offline
10:55 hagarth Sunghost: can you check the brick's log file to see why it is offline?
10:57 Sunghost no entries in brick log
10:58 Sunghost brick1 is shown on peer status
11:02 Sunghost I remember that after it had worked and I had no problems, I noticed the XFS errors on brick2; after that I upgraded from wheezy -> jessie -> stretch and repaired the filesystem
11:03 Sunghost could it be that some packages which glusterfs needs were upgraded too and now I get these errors?
11:03 17SADUPGG joined #gluster
11:03 Saravana_ joined #gluster
11:04 Philambdo joined #gluster
11:10 Sunghost i think thats the problem
11:10 Sunghost is glusterfs for debian stretch released?
11:10 Sunghost i found on brick2 (stretch) option 'rpc-auth.auth-glusterfs' is not recognized
11:11 Sunghost before that: option 'listen-port' is deprecated, preferred is 'transport.socket.listen-port', continuing with correction
11:23 vmallika joined #gluster
11:23 tomatto joined #gluster
11:30 overclk joined #gluster
11:33 Trefex joined #gluster
11:33 hagarth Sunghost: can you paste the last few lines of logs from brick2?
11:34 Sunghost 0-vol2-server: accepted client from clusternode01-3370-2015/10/19-11:01:13:330862-vol2-client-1-0-0 (version: 3.6.1)
11:34 Sunghost [2015-10-19 11:28:57.859560] I [MSGID: 115036] [server.c:552:server_rpc_notify] 0-vol2-server: disconnecting connection from clusternode01-3370-2015/10/19-11:01:13:330862-vol2-client-1-0-0
11:34 Sunghost [2015-10-19 11:28:57.859700] I [MSGID: 101055] [client_t.c:419:gf_client_unref] 0-vol2-server: Shutting down connection clusternode01-3370-2015/10/19-11:01:13:330862-vol2-client-1-0-0
11:36 hagarth Sunghost: this indicates that the brick process is online. is it still being listed as offline in volume status?
11:36 Sunghost ?
11:36 Sunghost volume status on brick2, right
11:37 Sunghost volume status vol2 is still ok for brick and nfs
11:37 Sunghost but brick1 are not ok
11:37 Sunghost is
11:40 haomaiwa_ joined #gluster
11:41 hagarth Sunghost: ah, in that case can you paste the last few lines of brick1?
11:43 Akee joined #gluster
11:46 Sunghost brick1 log after volume content
11:46 Sunghost [2015-10-19 11:44:53.939762] I [login.c:82:gf_auth] 0-auth/login: allowed user names: c6da5808-4b51-44ba-8b78-788332e911ed
11:46 Sunghost [2015-10-19 11:44:53.939849] I [server-handshake.c:585:server_setvolume] 0-vol2-server: accepted client from clusternode01-3033-2015/10/19-11:44:49:910677-vol2-client-0-0-0 (version: 3.6.1)
11:48 itisravi joined #gluster
11:59 R0ok_ joined #gluster
11:59 armyriad joined #gluster
11:59 plarsen joined #gluster
12:02 RayTrace_ joined #gluster
12:03 cabillman joined #gluster
12:05 Sunghost i will downgrade to jessie
12:17 poornimag joined #gluster
12:24 unclemarc joined #gluster
12:26 theron joined #gluster
12:34 ira joined #gluster
12:35 kdhananjay joined #gluster
12:37 tomatto hi, please, I have two virtual servers on one physical server and want to know whether it is better to use gluster with a distributed volume or to mount one partition in both virtual servers. And when I want to add replication with another server, can it be done on the fly, and how?
12:43 haomaiwa_ joined #gluster
12:46 Sunghost imo I wouldn't use glusterfs on 2 VMs on the same host; what is the advantage of your plan?
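
On tomatto's second point, replication can usually be added to a running volume by raising the replica count while adding a brick; a minimal sketch, assuming a single-brick volume vol0 and a new host server3 (hypothetical names; one new brick is needed per existing brick):

    gluster peer probe server3                                       # bring the new host into the pool
    gluster volume add-brick vol0 replica 2 server3:/export/brick1   # mirror the existing brick
    gluster volume heal vol0 full                                    # copy the existing data onto the new replica
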
12:46 Sunghost ok, downgraded, and now gluster-server didn't start on brick2 but did on brick1 ;)
12:46 Sunghost invoke-rc.d: initscript glusterfs-server, action "start" failed.
12:54 mmckeen joined #gluster
12:56 overclk_ joined #gluster
13:00 haomaiwa_ joined #gluster
13:08 jwd joined #gluster
13:16 Sunghost any idea how i can solve this: invoke-rc.d: initscript glusterfs-server, action "start" failed
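
A hedged way to see why the init script fails, assuming the Debian packages keep their logs in the usual location:

    glusterd --debug                                               # run the management daemon in the foreground with debug logging
    tail -n 50 /var/log/glusterfs/etc-glusterfs-glusterd.vol.log   # or read the last lines after a failed start
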
13:16 chirino joined #gluster
13:18 plarsen joined #gluster
13:21 theron joined #gluster
13:25 haomaiwa_ joined #gluster
13:26 mpietersen joined #gluster
13:27 julim joined #gluster
13:28 shyam joined #gluster
13:28 kotreshhr left #gluster
13:32 spalai left #gluster
13:32 kotreshhr joined #gluster
13:35 arcolife joined #gluster
13:39 haomaiw__ joined #gluster
13:43 overclk joined #gluster
13:43 bennyturns joined #gluster
13:44 Sunghost does this help to find a solution: wrong op-version
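
The op-version mismatch Sunghost suspects can be checked per node and, once every node runs the same release, raised cluster-wide. The value 30600 below is only illustrative for a 3.6-series cluster; verify it against the installed release.

    grep operating-version /var/lib/glusterd/glusterd.info   # each peer records its cluster operating version here
    gluster volume set all cluster.op-version 30600          # raise it after all nodes are upgraded (example value)
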
13:46 hamiller joined #gluster
13:46 haomaiwa_ joined #gluster
13:51 hchiramm joined #gluster
13:52 Humble joined #gluster
13:54 haomaiwang joined #gluster
13:57 harish_ joined #gluster
13:58 a_ta joined #gluster
14:00 Sunghost anyone who can help joejulian - hagarth ?
14:01 haomaiwa_ joined #gluster
14:01 dgandhi joined #gluster
14:02 dgandhi joined #gluster
14:03 skylar joined #gluster
14:04 dgandhi joined #gluster
14:05 haomai___ joined #gluster
14:05 zhangjn joined #gluster
14:06 zhangjn joined #gluster
14:11 Gill joined #gluster
14:15 maserati joined #gluster
14:16 theron joined #gluster
14:18 haomaiwa_ joined #gluster
14:20 haomaiwa_ joined #gluster
14:22 ivan_rossi joined #gluster
14:23 cholcombe joined #gluster
14:29 nage joined #gluster
14:36 squizzi_ joined #gluster
14:36 rafi joined #gluster
14:37 haomaiw__ joined #gluster
14:38 Trefex joined #gluster
14:38 zhangjn joined #gluster
14:39 theron_ joined #gluster
14:39 Curdes joined #gluster
14:40 EinstCrazy joined #gluster
14:42 theron joined #gluster
14:42 ivan_rossi joined #gluster
14:47 theron_ joined #gluster
14:51 ayma joined #gluster
14:51 Curdes Is it normal for files to lock during rebalance and heal?
14:56 aravindavk joined #gluster
15:00 Sunghost I changed the package path from 3.5 to 3.6, but when I run install glusterfs-server it installs the old one - any idea?
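
A hedged check for the packaging problem above on Debian: confirm which repository wins for glusterfs-server after switching the source path.

    apt-get update                      # refresh package lists after changing the repo from 3.5 to 3.6
    apt-cache policy glusterfs-server   # shows the candidate version and which source provides it
    # if the 3.5 source still wins, remove it or add an apt pin preferring the 3.6 repository
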
15:01 haomaiwang joined #gluster
15:04 bennyturns joined #gluster
15:05 theron joined #gluster
15:07 XpineX joined #gluster
15:09 prg3 joined #gluster
15:14 pseudonymous joined #gluster
15:15 Sunghost ok i got it - both bricks online and working
15:16 pseudonymous Reading the entry on basic troubleshooting (& indeed from personal observations) I'm starting to believe that it's OK to remove the configuration files seen in /var/lib/glusterd and reinitialise all peers of the cluster (It seemed to work when I recreated the volume with the same settings) -- is this correct? Doing so avoids a few problems with gluster & ansible that I'm seeing
15:18 hagarth joined #gluster
15:20 Sunghost should i run rebalance with fix-layout on distributed volume?
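
For the question above: fix-layout only rewrites directory layouts so newly created files hash onto all bricks; it does not move existing data. A sketch using the volume name vol2 seen in Sunghost's pasted logs:

    gluster volume rebalance vol2 fix-layout start   # recalculate directory layouts only
    gluster volume rebalance vol2 status             # watch progress; plain 'start' would also migrate data
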
15:22 harish_ joined #gluster
15:23 wushudoin joined #gluster
15:25 overclk joined #gluster
15:26 halloo joined #gluster
15:27 Slashman joined #gluster
15:30 a_ta joined #gluster
15:31 dblack joined #gluster
15:38 calavera joined #gluster
15:39 stickyboy joined #gluster
15:41 a_ta_ joined #gluster
15:43 Gill joined #gluster
15:46 a_ta joined #gluster
15:48 a_ta joined #gluster
16:01 jwaibel joined #gluster
16:02 haomaiwa_ joined #gluster
16:02 wushudoin| joined #gluster
16:15 a_ta joined #gluster
16:16 jwd joined #gluster
16:19 theron joined #gluster
16:20 theron joined #gluster
16:21 theron joined #gluster
16:26 armyriad2 joined #gluster
16:26 haomaiwa_ joined #gluster
16:27 halloo Hi, i have a 2-node replicated store running v3.2 on Centos 6. Each node has 1 brick, which should be fully sync'd right now.
16:28 halloo can I upgrade to v3.6 by simply uninstalling the v3.2 packages, installing the v3.6 and creating a new volume by pointing to the existing bricks?
16:28 zerick_ joined #gluster
16:28 overclk joined #gluster
16:29 cabillman_ joined #gluster
16:30 a2 joined #gluster
16:30 telmich joined #gluster
16:30 telmich joined #gluster
16:30 PaulePanter joined #gluster
16:30 vincent_vdk joined #gluster
16:30 NuxRo joined #gluster
16:30 TheSeven joined #gluster
16:30 samppah joined #gluster
16:30 johnmark joined #gluster
16:31 armyriad joined #gluster
16:31 B21956 joined #gluster
16:33 Larsen_ joined #gluster
16:33 shruti joined #gluster
16:34 gem joined #gluster
16:34 Champi joined #gluster
16:34 ccoffey should 3.6.2 to 3.6.6 be an okay upgrade experience? Planning on rolling updates across peers. Will do the clients first
16:34 sloop joined #gluster
16:35 Rapture joined #gluster
16:37 lbarfield joined #gluster
16:40 rafi joined #gluster
16:40 hagarth joined #gluster
16:40 jiffin joined #gluster
16:41 nbalacha joined #gluster
16:42 armyriad joined #gluster
16:43 bfoster joined #gluster
16:45 skoduri joined #gluster
16:46 r0di0n joined #gluster
16:48 Leildin halloo, you can do that sure, but you could also just upgrade the packages and keep your current volumes. why don't you want to do that ?
16:49 Leildin ccoffey, should be painless !
16:49 halloo can 3.2 be directly upgraded to 3.6?
16:49 Leildin ah wait I hadn't seen that halloo
16:50 Leildin that might be a bit tricky
16:50 Leildin you could always try and if anything goes wrong you can reinstall from scratch
16:50 maserati joined #gluster
16:50 Leildin and reuse the bricks
16:51 halloo i have ~300GB of data, and want to minimize the downtime window
16:51 Leildin you have to have downtime
16:51 ivan_rossi left #gluster
16:52 Leildin 3.6 brings in new features that require some downtime
16:52 halloo how much downtime?
16:57 halloo is there any official documentation about how to start a new v3.6 volume using existing bricks?
16:58 Gill joined #gluster
16:59 a_ta joined #gluster
17:01 haomaiwang joined #gluster
17:17 a_ta joined #gluster
17:22 a_ta joined #gluster
17:34 Leildin I had a full three hours of downtime, with help from JoeJulian to get it fixed
17:34 tomatto joined #gluster
17:35 ackjewt joined #gluster
17:35 Leildin as for documentation I can't help except look through all of gluster.org
17:35 Leildin http://www.gluster.org/community/documentation/index.php/Upgrade_to_3.6
17:35 Leildin maybe this could be a start but I'm guessing you've already found it
17:37 halloo and what did JoeJulian do to help you get things "fixed"?
17:37 halloo what did you have to do?
17:39 overclk joined #gluster
17:40 TheSeven joined #gluster
17:42 RedW joined #gluster
17:49 Leildin I can't really remember
17:49 Leildin I think it was modification of volume files and something with the bricks, like recalculating extended attributes or something
17:49 Leildin it was weird
17:50 Leildin I only get weird bugs
17:50 Leildin I'm the odd one out all the time with gluster so take my advice with a massive grain of salt
17:51 halloo ok thanks. not a satisfactory answer, but it's not your fault :-P
17:51 Leildin the stuff we did was mostly reading logs in /var/log/glusterfs and reading vol files in /var/lib/glusterd/vols
17:52 Leildin 3.6 brings a lot of stuff and coming from 3.2 will always be downtime
17:54 halloo one more question, do you know how to set up gluster to take advantage of two NIC cards per node?
17:54 halloo i'd like to set it up so that replication uses eth1 on each node and eth0 for clients
17:55 halloo say that node1 is 192.168.0.10 and 192.168.1.10 on eth0/1 respectively
17:55 halloo node2 is 192.168.0.11 / 192.168.1.11
17:56 halloo do i tell each node that its peer is the address on the 192.168.1.x network?
17:56 halloo and clients are then set up to connect on the 192.168.0.x network?
17:58 social joined #gluster
17:58 halloo so on 192.168.0.10 say something like
17:58 halloo gluster peer probe 192.168.1.11
17:58 halloo and on clients you say
17:59 halloo mount -t glusterfs 192.168.0.10:/foo /mnt/foo
17:59 wushudoin joined #gluster
18:01 jbautista- joined #gluster
18:04 haomaiwang joined #gluster
18:05 F2Knight joined #gluster
18:07 social joined #gluster
18:13 JoeJulian halloo: replication is done from the clients.
18:14 haomaiwang joined #gluster
18:14 halloo JoeJulian how do I setup for max performance with two NICs per node?
18:15 JoeJulian That's a network-engineer question. I have people for that. ;)
18:16 halloo i'm looking at a presentation titled "gluster_stack.odp". the picture has a "Public Network" and a "Storage Network"
18:16 JoeJulian Some people have had success with some types of bonding.
18:16 halloo but there's no info on how that's configured
18:16 halloo also, that diagram is what led me to believe that syncing was done by gluster?
18:16 JoeJulian dblack: ^^
18:16 JoeJulian That's your slide set.
18:17 halloo but now you're saying that the client does it?
18:17 * dblack wakes up
18:17 a_ta_ joined #gluster
18:18 deniszh joined #gluster
18:18 dblack JoeJulian, halloo : It's possible that's one of my slides... Can you point me to it to take a look-see?
18:18 a_ta_ joined #gluster
18:19 deniszh joined #gluster
18:19 dblack halloo: there are some reasons you might segregate "frontend" and "backend" networks, but it's not required
18:19 dblack halloo: most use cases will simply take advantage of bonding the multiple NICs for greater bandwidth
18:20 halloo http://www.gluster.org/community/documentation/images/9/9e/Gluster_for_Sysadmins_Dustin_Black.pdf
18:20 halloo slide 16
18:20 a_ta__ joined #gluster
18:20 dblack well, that's got my name on it, so.... :)
18:20 maveric_amitc_ joined #gluster
18:20 halloo i have 2-node, replicated setup
18:21 dblack halloo: that slide's an oldy but goody
18:21 halloo so, i thought that replication would happen on eth1 and client access would happen on eth0
18:21 dblack I have an updated graphic
18:21 dblack halloo: that's possible, but only sort-of
18:21 a_ta___ joined #gluster
18:21 dblack well, kinda sorta
18:22 dblack Replication is client-side, so it will happen over the client network regardless
18:22 dblack halloo: ^
18:22 dblack the storage network is really to isolate activity like self-heal and rebalance
18:23 dblack halloo: but also, if you're using NFS instead of the native client, you can effectively isolate replication to the storage network as well
18:23 doekia joined #gluster
18:24 halloo dblack: so is my setup above correct?
18:24 * dblack reads
18:24 a_ta joined #gluster
18:28 dblack halloo: ok.... you need to take advantage of hostnames for this to work correctly
18:28 dblack halloo: and still note that you won't isolate replication to the "backend" or "storage" network if you're using the native client
18:28 halloo ok, so bonding is best?
18:29 dblack halloo: you need to peer probe by hostname (and ensure you re-probe the first node by hostname since it will default to an IP)
18:30 halloo ok
18:30 dblack halloo: the nodes should resolve the hostname to the "backend" IP
18:30 dblack and the clients should resolve the hostnames to the "frontend" IP
18:30 dblack so you have a split name resolution to make this work
18:30 calavera joined #gluster
18:30 halloo ahh. i can use /etc/hosts for that i guess
18:31 dblack This setup will keep rebalance, self-heal, and NFS traffic isolated
18:31 dblack but
18:32 nathwill joined #gluster
18:32 dblack you should give careful consideration to whether that isolation is better than just having a single bonded network
18:32 dblack it will be use-case dependent
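
A minimal sketch of the split name resolution dblack describes, using halloo's addresses; gl1 and gl2 are hypothetical hostnames.

    # /etc/hosts on the gluster nodes: names resolve to the backend (storage) network
    192.168.1.10  gl1
    192.168.1.11  gl2

    # /etc/hosts on the clients: the same names resolve to the frontend network
    192.168.0.10  gl1
    192.168.0.11  gl2

    # probe by hostname, and re-probe the first node by name as dblack notes
    gluster peer probe gl2    # run on gl1
    gluster peer probe gl1    # run on gl2 so gl1 is stored by name rather than IP

    # clients then mount by hostname
    mount -t glusterfs gl1:/foo /mnt/foo
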
18:33 halloo dblack: I was talking to JoeJulian above about upgrading from v3.2 to v3.6
18:34 halloo i'm thinking it's best to just uninstall 3.2 and install the new v3.6 RPMs, setting up a new volume using the existing bricks
18:34 halloo would that work?
18:35 halloo i would use rsync to make sure that both bricks are identical before bringing up the new volume
18:39 haomaiwang joined #gluster
18:44 hagarth joined #gluster
18:44 JoeJulian halloo: I wouldn't waste my time rsync'ing.
18:45 JoeJulian I would just stop the volume(s), stop glusterd, upgrade the rpms, start glusterd, start the volume(s).
18:45 halloo JoeJulian, sounds like a plan. I was just trying to see if dblack had any other information.
18:45 JoeJulian Since you're starting from such an old version, I would consider running "gluster volume heal $volname full" after it's back up.
18:45 clutchk joined #gluster
18:48 dblack halloo: I think what JoeJulian says ^^ is your best route
18:49 dblack AFAIK the brick metadata will be compatible between versions, so you should be able to effectively upgrade the Gluster bits around the data.
18:49 dblack but that's not gospel, so test it. :)
18:51 halloo ok thanks
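
The sequence JoeJulian outlines, as a hedged sketch for halloo's CentOS 6 nodes. The volume name foo is taken from halloo's earlier mount example; the package names and the presence of a configured 3.6 yum repository are assumptions.

    # with clients unmounted, on the nodes (3.2 -> 3.6 is an offline upgrade)
    gluster volume stop foo
    service glusterd stop
    yum update glusterfs glusterfs-server glusterfs-fuse   # assumed package set, from a 3.6 repo
    service glusterd start
    gluster volume start foo
    gluster volume heal foo full   # full self-heal afterwards, as suggested, since the starting version is so old
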
18:56 julim joined #gluster
19:05 rwheeler joined #gluster
19:06 ramky joined #gluster
19:09 JoeJulian <sigh> file a bug
19:09 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
19:18 ctrianta joined #gluster
19:36 mhulsman joined #gluster
19:57 halloo joined #gluster
20:01 calavera joined #gluster
20:14 a_ta joined #gluster
20:22 jmarley joined #gluster
20:26 arcolife joined #gluster
20:33 cabillman joined #gluster
20:46 kminooie left #gluster
20:51 mhulsman1 joined #gluster
20:54 a_ta joined #gluster
20:56 tomatto joined #gluster
20:58 calavera joined #gluster
21:02 Rapture joined #gluster
21:10 halloo joined #gluster
21:13 cliluw joined #gluster
21:15 togdon joined #gluster
21:26 dblack joined #gluster
21:28 a_ta_ joined #gluster
21:41 stickyboy joined #gluster
21:45 shaunm joined #gluster
21:49 calavera joined #gluster
21:50 kminooie joined #gluster
21:59 kminooie so I am having a problem. here is the back story of things that I think are relevant. I don't know when this happened, but when I updated to 3.6.6 I noticed that the 'fetch-attempts' option had been removed from the client. now I'm having this situation that if something goes wrong in the cluster the client just gets locked down and does not release the mount point or fail under any circumstances, ( this is the latest snip
22:06 dijuremo2 joined #gluster
22:06 kminooie JoeJulian: ^^^^ help me
22:07 dijuremo2 Need some help, ran updates on my gluster peer used for quorum and now it cannot connect to the others. I am having problems trying to find the packages for 3.7.3, which is what I was running. This gluster node is running Kubuntu 14.04
22:07 DV joined #gluster
22:08 dijuremo2 The updates applied the latest version 3.7.5 and gluster v status fails...
22:09 dijuremo2 root@ysmbackups:~# gluster v status
22:09 dijuremo2 Staging failed on 10.0.1.7. Please check log file for details.
22:10 dijuremo2 Staging failed on 10.0.1.6. Please check log file for details.
22:14 kminooie dijuremo2: I am not really an expert here, but what is in the log file?
22:15 dijuremo2 Hmmm which log file?
22:15 dijuremo2 There are so many...
22:16 kminooie the ones you would care about here, I guess, would be etc-glusterfs-glusterd.vol.log and glustershd.log
22:17 kminooie which are in /var/log/glusterfs on your node machines
22:17 dijuremo2 On the new node or other nodes?
22:17 kminooie Staging failed on 10.0.1.7. Please check log file for details.
22:18 kminooie taging failed on 10.0.1.6. Please check log file for details
22:20 JoeJulian kminooie: Problem solved! fetch-attempts was never removed.
22:21 JoeJulian https://github.com/gluster/glusterfs/blob/release-3.6/xlators/mount/fuse/utils/mount.glusterfs.in#L437
22:21 glusterbot Title: glusterfs/mount.glusterfs.in at release-3.6 · gluster/glusterfs · GitHub (at github.com)
22:21 ira joined #gluster
22:22 dijuremo2 Would you remind me of the command to upload stuff to a pastebin?
22:22 JoeJulian @paste
22:22 glusterbot JoeJulian: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
22:23 kminooie dijuremo2: or just go to fpaste.org  and copy paste them. you don't really need to post the entire file. just find the relevant messages.
22:24 dijuremo2 http://termbin.com/kerl
22:25 dijuremo2 I saw a similar issue mentioned on the mailing list when upgrading to 3.7.5 but no resolution...
22:25 dijuremo2 Anybody knows how I could dig the 3.7.3 binaries from PPA?
22:25 arielb joined #gluster
22:25 kminooie JoeJulian: I am gonna try it again, but a couple of days ago when I did the upgrade, it was giving me an error about fetch-attempts and I had to remove it from the mount options before I was able to mount the volume. but I am gonna test again right now
22:26 JoeJulian mount.glusterfs is just a shell script. You can easily debug it as you would any shell script. Run it directly instead of going through mount.
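
Following that suggestion, the mount helper can be traced like any shell script; a hedged example using kminooie's fetch-attempts option with hypothetical server, volume, and mountpoint names:

    # mount helpers are called as: mount.<type> <spec> <dir> -o <options>
    sh -x /sbin/mount.glusterfs server1:/vol0 /mnt/vol0 -o fetch-attempts=5,log-level=DEBUG
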
22:28 dijuremo2 This is the 3.7.5 upgrade issue I am also hitting -> https://www.mail-archive.com/gluster-users@gluster.org/msg22255.html
22:28 glusterbot Title: [Gluster-users] 3.7.5 upgrade issues (at www.mail-archive.com)
22:29 dijuremo2 In my case, I have three nodes, two have bricks, the third one is the one that is used for quorum...
22:31 dijuremo2 I was able to find this, but the binaries are gone... https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7/+build/7733320
22:31 glusterbot Title: amd64 build of glusterfs 3.7.3-ubuntu1~trusty1 : glusterfs-3.7 : “Gluster” team (at launchpad.net)
22:31 JoeJulian Edit /etc/glusterfs/glusterd.vol to contain this line: option rpc-auth-allow-insecure on
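
The line JoeJulian mentions belongs in the management volfile (it is not a volume option); a sketch of the stanza with the other options abbreviated, plus the restart it needs to take effect:

    # /etc/glusterfs/glusterd.vol
    volume management
        type mgmt/glusterd
        option working-directory /var/lib/glusterd
        option rpc-auth-allow-insecure on
        # ... remaining existing options unchanged ...
    end-volume

    # restart the management daemon afterwards
    service glusterfs-server restart   # Debian/Ubuntu service name; 'service glusterd restart' on RHEL/CentOS
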
22:35 dijuremo2 JoeJulian, that was already in the other nodes, not in the one that updated to 3.7.5; still no luck
22:36 dijuremo2 root@ysmbackups:/var/log/glusterfs# grep insecure /etc/glusterfs/glusterd.vol
22:36 dijuremo2 option rpc-auth-allow-insecure on
22:36 dijuremo2 root@ysmbackups:/var/log/glusterfs# gluster v status
22:36 dijuremo2 Staging failed on 10.0.1.7. Please check log file for details.
22:36 dijuremo2 Staging failed on 10.0.1.6. Please check log file for details.
22:36 dijuremo2 On the other node, the command says:
22:36 dijuremo2 [root@ysmha01 glusterfs]# gluster v status
22:36 dijuremo2 Staging failed on 10.0.1.5. Error: Volume name get failed
22:37 dijuremo2 Is it possible the option from etc did not take?
22:37 dijuremo2 [root@ysmha01 glusterfs]# gluster volume get export all | grep auth
22:37 dijuremo2 auth.allow                              *
22:37 dijuremo2 auth.reject                             (null)
22:37 dijuremo2 auth.ssl-allow                          *
22:37 dijuremo2 nfs.rpc-auth-unix                       on
22:37 dijuremo2 nfs.rpc-auth-null                       on
22:37 dijuremo2 nfs.rpc-auth-allow                      all
22:37 dijuremo2 nfs.rpc-auth-reject                     none
22:37 dijuremo2 nfs.exports-auth-enable                 (null)
22:37 dijuremo2 nfs.auth-refresh-interval-sec           (null)
22:37 dijuremo2 nfs.auth-cache-ttl-sec                  (null)
22:38 JoeJulian No.
22:41 dijuremo2 Just so that I understand, when I get the settings as shown above what should rpc-auth-allow-insecure translate into? Or is that a runtime option that I would see via ps ?
22:41 JoeJulian Don't paste in channel.
22:41 JoeJulian The options set through etc are not volume options, they're management options.
22:41 JoeJulian So as long as you restarted glusterd when you set it, it's in effect.
22:43 JoeJulian A word of advice: When sharing log files you would like someone else to help you debug, truncate the file, cause the problem, then paste it. It's hard to decipher scores of repeated attempts.
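
A hedged version of that workflow, reusing glusterbot's termbin hint from earlier and assuming the management log is the one of interest:

    LOG=/var/log/glusterfs/etc-glusterfs-glusterd.vol.log
    > "$LOG"                        # truncate the log
    gluster v status                # reproduce the failing command once
    nc termbin.com 9999 < "$LOG"    # paste only the fresh, relevant lines
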
22:43 dijuremo2 I only have the two peers with the bricks up at the moment, if I were to restart glusterd to make sure, that would freeze one of the bricks since quorum is not met, right?
22:44 JoeJulian I think so.
22:44 dijuremo2 And if it does not come up nicely, then all my stuff will go down.. ugh...
22:45 dijuremo2 I believe I had the options there from when I tried the 3.6.x to 3.7.3 upgrade, but I could never get 3.6.x to co-exist with 3.7.3
22:45 JoeJulian Sorry, they messed with that so I don't know its behavior now, nor any way to override the behavior.
22:45 JoeJulian wrt quorum
22:47 tomatto joined #gluster
22:50 arielb joined #gluster
22:50 dijuremo2 Anyone has a suggestion on how I can find the 3.7.3 from the PPA?
22:52 kminooie https://launchpad.net/~gluster
22:52 glusterbot Title: Gluster in Launchpad (at launchpad.net)
22:52 dijuremo2 kminooie: https://launchpad.net/~gluster/+archive/ubuntu/glusterfs-3.7/+build/7733320 but no binaries... :(
22:52 glusterbot Title: amd64 build of glusterfs 3.7.3-ubuntu1~trusty1 : glusterfs-3.7 : “Gluster” team (at launchpad.net)
22:54 kminooie sorry I forgot about the ppa for a sec. you might be able to use debian packages, but I am not necessarily recommending that.
22:56 theron joined #gluster
22:56 kminooie those you can get from http://download.gluster.org/pub/gluster/glusterfs/3.7/3.7.3/Debian/jessie/apt/pool/main/g/glusterfs/
22:56 glusterbot Title: Index of /pub/gluster/glusterfs/3.7/3.7.3/Debian/jessie/apt/pool/main/g/glusterfs (at download.gluster.org)
22:57 dijuremo2 I had found those, but just not built for trusty specifically, so I guess use as a last resort...
22:58 dijuremo2 I am actually not so happy with 3.7.x; I have issues with samba producing core dumps. 3.6.x was good. Would it be too crazy to stop all peers and simply downgrade to the latest 3.6.x?
22:59 dijuremo2 I went to 3.7.x seeking improvements in small-file performance, but did not have any luck with 3.7.x
23:00 kminooie JoeJulian: ^^^^ is downgrade possible?
23:04 dijuremo2 Cause going up to 3.7.5 is also not going to help my issue with samba vfs per:   https://bugzilla.redhat.com/show_bug.cgi?id=1234877
23:04 glusterbot Bug 1234877: high, medium, ---, rhs-smb, NEW , Samba crashes with 3.7.4 and VFS module
23:04 JoeJulian dijuremo2: Do you use quota?
23:05 dijuremo2 No, I do not use quota, never have enabled it
23:05 JoeJulian then downgrade should be fine.
23:06 dijuremo2 http://termbin.com/xbmm
23:06 dijuremo2 That should confirm I will be OK?
23:08 JoeJulian I've not done a downgrade myself, but the only metadata structure changes that I would expect to cause trouble is the quota changes. Since you're not using them I would expect it to be ok.
23:09 JoeJulian ... as just another user with a bit more experience, not a developer.
23:09 dijuremo2 Would you go up to 3.7.5 then, rather than down to 3.6.x?
23:10 JoeJulian Not according to the status of that bug report.
23:10 tessier joined #gluster
23:10 dijuremo2 I am seeing those issues already, but not noticing any problems other than the crashes and smbd re-spawns
23:11 dijuremo2 However, it's been a while and it has not been fixed... so I guess if I am taking all the servers offline, I should play it safe and go back to 3.6.x, right?
23:18 JoeJulian Sure
23:22 kminooie JoeJulian: I feel like an idiot. fetch-attempts is working fine. I don't know what the error msg that I was getting was. I need a vacation. thanks though
23:27 JoeJulian Glad to hear it. :)
23:30 mlhamburg__ joined #gluster
23:30 kminooie JoeJulian: while I have you, is a rolling upgrade possible with gluster? I never had to try this one before. Can I upgrade from 3.3.1 to 3.6.6 ( which includes upgrading from squeeze to wheezy ) one node at a time without shutting down the cluster or not?
23:39 JoeJulian No. There's an rpc change in the 3.4 series that will interfere with that.
23:41 malevolent joined #gluster
23:44 kminooie thanks joe, you are the best.
23:58 calavera joined #gluster
