
IRC log for #gluster, 2017-09-20


All times shown according to UTC.

Time Nick Message
00:01 vbellur joined #gluster
00:28 shyam joined #gluster
00:35 renout joined #gluster
00:40 jobewan joined #gluster
01:07 shyam joined #gluster
01:20 stoff1973 joined #gluster
01:22 rdanter joined #gluster
01:55 ilbot3 joined #gluster
01:55 Topic for #gluster is now Gluster Community - https://www.gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - https://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:58 Gambit15 joined #gluster
02:17 Neoon I ran into a mounting issue on boot, maybe someone has an idea? https://unix.stackexchange.com/questions/393307/mount-glusterfs-with-systemd-on-boot
02:17 glusterbot Title: Mount glusterfs with Systemd on BOOT - Unix & Linux Stack Exchange (at unix.stackexchange.com)
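A common cause of GlusterFS mounts failing only at boot is the mount racing the network and glusterd; a minimal /etc/fstab sketch that lets systemd order and defer it (server name, volume, and mount point are placeholders, not taken from the question):

    # mount as a network filesystem, only after glusterd is up, and defer until first access
    server1:/myvol  /mnt/gluster  glusterfs  defaults,_netdev,x-systemd.automount,x-systemd.requires=glusterd.service  0 0

The x-systemd.automount option sidesteps most boot-ordering races because the real mount happens on first access rather than during early boot.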
02:36 jiffin joined #gluster
02:57 kramdoss_ joined #gluster
03:22 gyadav joined #gluster
03:22 skoduri_ joined #gluster
03:33 aravindavk joined #gluster
03:46 itisravi joined #gluster
03:50 nbalacha joined #gluster
04:17 atinm joined #gluster
04:29 Shu6h3ndu joined #gluster
04:42 sunnyk joined #gluster
04:46 atinm joined #gluster
04:51 tru_tru joined #gluster
04:56 g_work joined #gluster
05:01 poornima joined #gluster
05:04 ppai joined #gluster
05:08 sanoj joined #gluster
05:11 xavih joined #gluster
05:13 sanoj joined #gluster
05:18 ndarshan joined #gluster
05:24 karthik_us joined #gluster
05:33 susant joined #gluster
05:41 hgowtham joined #gluster
05:42 Prasad joined #gluster
05:49 skumar joined #gluster
05:50 susant joined #gluster
05:51 Saravanakmr joined #gluster
05:52 msvbhat joined #gluster
05:57 nishanth joined #gluster
05:58 BlackoutWNCT joined #gluster
06:08 karthik_us joined #gluster
06:12 jtux joined #gluster
06:17 nbalacha joined #gluster
06:19 jkroon joined #gluster
06:23 sac joined #gluster
06:28 gyadav_ joined #gluster
06:32 BlackoutWNCT Hey guys, having an issue with the Samba VFS module for gluster on ubuntu and am hoping someone can point me in the right direction.
06:32 BlackoutWNCT Basically, when using the VFS config for Samba, the shares are all inaccessible; however, not using the VFS works without issue.
06:33 BlackoutWNCT I've done a full dump of logs and various pieces of info I have available here: https://paste.ubuntu.com/25577859/
06:33 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
06:33 BlackoutWNCT If anyone has any suggestions, I'd love to hear them.
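For context, a minimal smb.conf share sketch for the vfs_glusterfs module (volume name and log path are placeholders; the real config is in the paste above):

    [gvol]
        vfs objects = glusterfs
        glusterfs:volume = myvol
        glusterfs:logfile = /var/log/samba/glusterfs-gvol.log
        glusterfs:loglevel = 7
        path = /
        read only = no
        kernel share modes = no

With vfs_glusterfs the path is interpreted relative to the volume root rather than a local directory, and kernel share modes usually has to be disabled; getting either wrong is a common reason shares become inaccessible only when the VFS module is enabled.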
06:34 gyadav__ joined #gluster
06:38 dominicpg joined #gluster
06:43 xavih joined #gluster
06:46 aravindavk joined #gluster
06:49 gyadav_ joined #gluster
06:53 kdhananjay joined #gluster
06:54 hgowtham joined #gluster
07:02 mbukatov joined #gluster
07:05 rwheeler joined #gluster
07:19 apandey joined #gluster
07:27 ppai joined #gluster
07:28 sanoj joined #gluster
07:30 decayofmind joined #gluster
07:34 brayo joined #gluster
07:34 brayo joined #gluster
07:46 gyadav__ joined #gluster
07:47 fsimonce joined #gluster
07:48 _KaszpiR_ joined #gluster
07:50 ppai joined #gluster
08:16 itisravi joined #gluster
08:22 Wizek_ joined #gluster
08:30 msvbhat joined #gluster
08:33 skoduri joined #gluster
08:45 Wizek_ joined #gluster
08:45 susant joined #gluster
08:51 sanoj joined #gluster
08:54 sanoj joined #gluster
08:58 ppai joined #gluster
09:00 buvanesh_kumar joined #gluster
09:01 Saravanakmr joined #gluster
09:04 foster joined #gluster
09:05 [diablo] joined #gluster
09:09 jiffin1 joined #gluster
09:12 d4n13L joined #gluster
09:15 d4n13L joined #gluster
09:19 atinm joined #gluster
09:26 MrAbaddon joined #gluster
09:26 msvbhat joined #gluster
09:30 jiffin1 joined #gluster
09:54 msvbhat joined #gluster
10:01 poornima joined #gluster
10:05 nh2 joined #gluster
10:23 ThHirsch joined #gluster
10:25 shyam joined #gluster
10:28 nh2 joined #gluster
10:31 skumar_ joined #gluster
10:38 poornima joined #gluster
10:43 dijuremo joined #gluster
10:46 dijuremo joined #gluster
10:47 skumar_ joined #gluster
11:03 atinm joined #gluster
11:03 skumar__ joined #gluster
11:06 jiffin1 joined #gluster
11:07 baber joined #gluster
11:11 ppai joined #gluster
11:15 jiffin1 joined #gluster
11:24 gyadav_ joined #gluster
11:33 ppai joined #gluster
12:06 Shu6h3ndu joined #gluster
12:40 nbalacha joined #gluster
12:42 vbellur1 joined #gluster
12:43 vbellur joined #gluster
12:44 vbellur joined #gluster
12:46 gyadav_ joined #gluster
12:53 ppai joined #gluster
12:56 vbellur joined #gluster
12:57 vbellur joined #gluster
12:58 vbellur joined #gluster
12:58 vbellur joined #gluster
12:59 vbellur1 joined #gluster
13:05 baber joined #gluster
13:05 shyam joined #gluster
13:22 shaunm joined #gluster
13:26 skylar joined #gluster
13:30 msvbhat joined #gluster
13:34 fcoelho joined #gluster
13:36 fcoelho joined #gluster
13:37 fcoelho joined #gluster
13:42 plarsen joined #gluster
13:46 ppai joined #gluster
13:50 atinm|brb joined #gluster
13:51 jefarr joined #gluster
14:00 farhorizon joined #gluster
14:01 _KaszpiR_ joined #gluster
14:02 hmamtora joined #gluster
14:10 jtux joined #gluster
14:12 baber joined #gluster
14:19 shyam joined #gluster
14:22 kotreshhr joined #gluster
14:24 dominicpg joined #gluster
14:34 susant joined #gluster
14:36 kpease joined #gluster
14:45 omie88877777 joined #gluster
14:52 jstrunk joined #gluster
14:55 jiffin joined #gluster
15:00 farhorizon joined #gluster
15:01 farhorizon joined #gluster
15:03 ppai joined #gluster
15:04 shyam joined #gluster
15:07 bcasaleiro joined #gluster
15:07 bcasaleiro Hi there
15:07 bcasaleiro I am looking for some help if possible
15:12 gyadav_ joined #gluster
15:15 wushudoin joined #gluster
15:22 farhorizon joined #gluster
15:24 bcasaleiro Any idea why a volume create would return "Host <host> is not in ' Peer in Cluster' state" if the peer probe succeeded and peer status says it is in 'Peer in Cluster'?
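One way to narrow this down is to check the peer state from every node rather than only the one running the create, since this error often means some other peer still sees a stale or unresolved hostname; a quick sketch (hostname is a placeholder):

    # run on each node in turn
    gluster peer status
    gluster pool list
    # if a node lists its peers by IP only, re-probe by name from that node
    gluster peer probe server1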
15:27 jbrooks joined #gluster
15:30 Shu6h3ndu joined #gluster
15:32 farhoriz_ joined #gluster
15:34 _KaszpiR_ joined #gluster
15:47 leifmadsen joined #gluster
15:48 jiffin joined #gluster
15:51 nh2 joined #gluster
15:54 baber joined #gluster
15:54 dominicpg joined #gluster
15:55 jiffin joined #gluster
16:01 MrAbaddon joined #gluster
16:02 Wayke91 joined #gluster
16:03 major joined #gluster
16:05 aravindavk joined #gluster
16:13 shaunm joined #gluster
16:13 baber joined #gluster
16:31 MrAbaddon joined #gluster
16:54 farhorizon joined #gluster
16:57 gyadav_ joined #gluster
17:08 renihs joined #gluster
17:17 dijuremo joined #gluster
17:18 mahendratech joined #gluster
17:28 MrAbaddon joined #gluster
17:33 gluster_FAN1 joined #gluster
17:34 msvbhat joined #gluster
17:34 gluster_FAN1 hello! I would like to ask a question. I set up glusterfs and when I rebooted the system all nodes went to a disconnected state. How is this possible?
17:34 g_work joined #gluster
17:35 JoeJulian quorum?
17:35 gluster_FAN1 https://pastebin.com/eEUpdwXE
17:35 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
17:36 gluster_FAN1 [2017-09-20 16:44:41.278734] C [MSGID: 106002] [glusterd-server-quorum.c:356:glusterd_do_volume_quorum_action] 0-management: Server quorum lost for volume engine. Stopping local bricks.
17:36 gluster_FAN1 I get this message from systemctl
17:39 JoeJulian Well, replica 3 with a quorum-ratio of >50% should be good with losing one server - so it must not be connected to all the servers. Check peer and volume status?
17:41 gluster_FAN1 https://pastebin.com/diePR0YX <- here is both. After reboot went to this state, I don't know what to do...
17:41 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
17:44 vbellur joined #gluster
17:44 gluster_FAN1 I have 3 hosts and my intention was to make highly available glusterfs storage. I want to be able to access it even if 2 out of 3 hosts are down.
17:45 gluster_FAN1 gluster volume create vm-data replica 3 transport tcp master-xeon.private.fi:/bricks/vm-data gpu.private.fi:/bricks/vm-data nogpu.private.fi:/bricks/vm-data force
17:45 gluster_FAN1 ^is this wrong for doing so?
17:46 JoeJulian Volume is already created. You just need to start all the glusterd again.
17:46 JoeJulian Then, if you don't want quorum (you'll be at risk for split-brain) you'll need to disable it.
17:47 JoeJulian cluster.server-quorum-type=none
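That option is applied with the usual volume-set syntax, for example (volume name is a placeholder):

    gluster volume set myvol cluster.server-quorum-type none

With server quorum off, a lone surviving node will keep its bricks up, which is precisely the scenario that can produce split-brain once the other nodes come back.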
17:48 gluster_FAN1 https://paste.fedoraproject.org/paste/YI4RCJ984-jzlkL6MeJURQ
17:48 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
17:48 gluster_FAN1 If I disable quorum, I can have storage similar to RAID6 split over 3 hosts?
17:49 JoeJulian no
17:49 gluster_FAN1 no? Then how can I do it?
17:51 JoeJulian I don't know of anything that does that. Gluster does have a parity volume type, but that's closer to raid5 with added network latency.
17:52 msvbhat joined #gluster
17:52 gluster_FAN1 really? I had earlier set up glusterfs with gdeploy and it said "RAID6" and storage was available even when 2 out of 3 hosts were down. But then I wanted to make glusterfs myself and I cannot make the same thing happen
17:54 MrAbaddon joined #gluster
17:54 JoeJulian <sigh> I need to have words with people. Raid6 indeed. After we've been working for years to avoid those kinds of comparisons.
17:55 JoeJulian So there's replica volumes, distribute volumes, even disperse volumes.
17:56 JoeJulian and you can combine those to make larger or more resilient volumes.
17:56 JoeJulian But when you have 3 nodes, two of them should stay up or you have no guarantee of consistency.
17:57 renihs wouldn't replicated still be consistent with just one node? just not very redundant
17:57 JoeJulian So, rather than starting with trying to define something that has parity with a traditional single-computer storage - what's your use case and SLA?
17:57 gluster_FAN1 ok...so now that I had rebooted all of the three nodes at the same time and my volume is stuck in disconnected. Is there an easy fix for it?
17:58 JoeJulian renihs: If you could guarantee that the other two nodes were not involved in storage transactions then yes.
17:58 _KaszpiR_ joined #gluster
17:58 JoeJulian If you have a netsplit, or you rotate through which server is up, you'll create splitbrain.
17:58 JoeJulian gluster_FAN1: Try "gluster volume start $volname force"
17:59 gluster_FAN1 volume start: engine: failed: Quorum not met. Volume operation not allowed.
17:59 renihs if all the nodes are on the same network, that should mitigate split brain though, but good to know that transactions are not atomic
17:59 gluster_FAN1 YES! they are all on the same network
18:00 JoeJulian It's the no-metadata design that eliminates that spof and complexity, but then it cannot log transactions and, instead, logs anti-transactions.
18:01 renihs anti-transactions?
18:01 JoeJulian gluster_FAN1: does peer status still show disconnected?
18:01 gluster_FAN1 State: Peer in Cluster (Disconnected)
18:01 gluster_FAN1 both hosts disconnected
18:04 JoeJulian renihs: client: "I'm going to write to this file on you three servers." servers: "Ok, I'll mark the file that there's a pending write that the other two might not know about." server2 dies. client writes. server1 and 3 reset their marks for each other - but retain the state for server2, ie "server2 has missed N transactions". There's no log to play back but when server2 comes back, the self-heal daemon does the sync and resets those flags.
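The pending-write marks described here are the AFR changelog extended attributes stored on each brick copy; they can be inspected directly on a brick (brick path and file name are placeholders):

    # a non-zero trusted.afr.<volume>-client-N value means replica N still owes heals for this file
    getfattr -d -m trusted.afr -e hex /bricks/vm-data/some-file

The self-heal daemon copies the data to the lagging replica and then zeroes these counters, which is the "resets those flags" step mentioned above.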
18:05 kotreshhr left #gluster
18:05 JoeJulian gluster_FAN1: Check glusterd logs and firewall (iptables maybe).
18:09 renihs JoeJulian: i see... thanks for the explanation
18:09 gluster_FAN1 https://paste.fedoraproject.org/paste/jJBZqULhpQJMPGUBPZVhmQ
18:09 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
18:10 JoeJulian There's a bunch of more complicated ways of creating split-brain. The devs have done a great job at preventing most of them.
18:11 JoeJulian gluster_FAN1: connection to 192.168.1.102:24007 failed (No route to host); disconnecting socket
18:11 JoeJulian There's another one in there too.
18:12 JoeJulian Looks like a network problem.
18:12 renihs two of them are not reachable
18:12 renihs quorum not met, what type is that?
18:12 gluster_FAN1 what? but I enabled their local dns addresses...do I have to do the same for IP?
18:13 renihs no route to host could also be just "reject"
18:13 renihs firewall or like
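A "No route to host" on port 24007 when the peers are otherwise reachable usually does come down to a host firewall; on firewalld-based systems, opening the management port plus the brick port range is typically enough, e.g. (the brick range varies by version, so treat this as a sketch):

    firewall-cmd --permanent --add-port=24007-24008/tcp   # glusterd management
    firewall-cmd --permanent --add-port=49152-49251/tcp   # brick ports
    firewall-cmd --reload

Recent firewalld packages also ship a predefined 'glusterfs' service that can be added instead of raw port numbers.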
18:13 JoeJulian Ok, meetings... bbl.
18:13 mahendratech joined #gluster
18:17 gluster_FAN1 you were right!
18:17 renihs hmm is there a way to have a glusternode (that just acts as replica, no read/write access ever) not mount the brick?
18:17 gluster_FAN1 I had enabled connection to localdomain address, but it was not enough
18:18 gluster_FAN1 State: Peer in Cluster (Connected)
18:21 gluster_FAN1 oh....one more thing ^_^:
18:21 gluster_FAN1 Mount failed. Please check the log file for more details.
18:22 gluster_FAN1 https://paste.fedoraproject.org/paste/mquQzwat7qznz7iuMH4qOQ
18:22 glusterbot Title: Untitled - Modern Paste (at paste.fedoraproject.org)
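When a FUSE mount fails like this, the client mount log is usually the quickest pointer; the log level and location can be forced on the command line, a sketch reusing names from earlier in the log (mount point is a placeholder):

    mount -t glusterfs -o log-level=DEBUG,log-file=/var/log/glusterfs/engine-mount.log master-xeon.private.fi:/engine /mnt/engine
    tail -n 50 /var/log/glusterfs/engine-mount.log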
18:26 baber joined #gluster
18:30 msvbhat joined #gluster
19:05 gluster_FAN1 thanks for helping with the connection problem, maybe some day I'll figure out the mount problem too
19:11 bwerthmann joined #gluster
19:27 mahendratech joined #gluster
19:36 * PatNarciso is thinking about setting up (production testing... is this a thing) a replicated vol over WAN.
19:37 PatNarciso I'm favoring straight replicated over geo-replicated.  3 VPServers, 1 brick each.  Miami, Atlanta, NYC (maybe NJ...)
19:37 PatNarciso anyone foresee anything I'm going to regret about this -- for a small-file, heavy read workload?
19:42 * PatNarciso reads 'deploying in aws' as there are some items here that apply.
19:46 baber joined #gluster
19:59 foobert Suppose I have a shared block storage environment (FC SAN). Does glusterfs have any provisions for dealing with this in context of creating an HA nfs head on top of this storage?
20:05 vbellur joined #gluster
20:05 farhorizon joined #gluster
20:06 vbellur joined #gluster
20:07 kraynor5b joined #gluster
20:08 vbellur joined #gluster
20:14 mahendratech joined #gluster
20:21 kraynor5b_ joined #gluster
20:22 kraynor5b__ joined #gluster
20:59 mahendratech joined #gluster
21:03 baber joined #gluster
21:25 bwerthmann joined #gluster
21:28 mahendratech joined #gluster
21:49 gospod2 joined #gluster
21:54 mahendratech joined #gluster
21:54 farhorizon joined #gluster
21:54 wushudoin joined #gluster
22:16 ThHirsch joined #gluster
22:31 vbellur joined #gluster
22:42 farhorizon joined #gluster
22:47 nh2 joined #gluster
23:15 gospod2 joined #gluster
23:19 plarsen joined #gluster
23:30 farhorizon joined #gluster
23:39 shyam joined #gluster
