IRC log for #gluster, 2016-09-20

All times shown according to UTC.

Time Nick Message
00:15 bitchecker joined #gluster
00:58 rafi joined #gluster
01:03 ankitraj joined #gluster
01:15 shdeng joined #gluster
01:33 Lee1092 joined #gluster
01:34 jeremyh joined #gluster
01:47 ilbot3 joined #gluster
01:47 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
02:06 hackman joined #gluster
02:18 harish joined #gluster
02:23 rafi joined #gluster
03:00 kimmeh joined #gluster
03:04 Gambit15 joined #gluster
03:07 nishanth joined #gluster
03:15 Mmike joined #gluster
03:23 nbalacha joined #gluster
03:44 jiffin joined #gluster
03:58 beemobile joined #gluster
04:03 prth joined #gluster
04:05 jiffin joined #gluster
04:09 itisravi joined #gluster
04:15 riyas joined #gluster
04:17 RameshN joined #gluster
04:24 kramdoss_ joined #gluster
04:25 jkroon joined #gluster
04:32 atinm joined #gluster
04:34 kdhananjay joined #gluster
04:35 shubhendu joined #gluster
04:40 Philambdo joined #gluster
04:40 kdhananjay joined #gluster
04:45 msvbhat joined #gluster
04:46 rafi joined #gluster
04:52 prasanth joined #gluster
04:55 Muthu joined #gluster
04:57 squeakyneb joined #gluster
05:14 deniszh joined #gluster
05:20 kotreshhr joined #gluster
05:22 ankitraj joined #gluster
05:23 morse joined #gluster
05:30 ndarshan joined #gluster
05:30 shdeng joined #gluster
05:33 karthik_ joined #gluster
05:37 satya4ever joined #gluster
05:38 mhulsman joined #gluster
05:39 msvbhat joined #gluster
05:41 prth joined #gluster
05:46 ppai joined #gluster
05:54 arcolife joined #gluster
05:58 nishanth joined #gluster
06:06 devyani7 joined #gluster
06:12 cvstealth joined #gluster
06:19 Saravanakmr joined #gluster
06:19 Saravanakmr_ joined #gluster
06:37 jtux joined #gluster
06:41 mhulsman joined #gluster
06:44 k4n0 joined #gluster
06:47 harish joined #gluster
06:52 jtux joined #gluster
06:54 jri joined #gluster
07:02 kimmeh joined #gluster
07:04 robb_nl joined #gluster
07:05 ankitraj joined #gluster
07:05 xi4ohui joined #gluster
07:07 xi4ohui Hi guys, how should I delete a volume when some of its peers are down?
07:08 xi4ohui it tells me: volume delete: images: failed: Some of the peers are down
07:17 fsimonce joined #gluster
07:23 kshlm xi4ohui, You can do a force delete `gluster volume delete <name> force`
07:25 kshlm Deleting volumes when peers are down is disallowed because they don't get deleted on the down peers, and when those peers come back up, the stale volumes get added back into the cluster.
07:26 kshlm So, if you cannot get the peers that are down back online, do a force delete. If you can get them back online, do that and then do a normal delete.
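
A minimal sketch of the sequence kshlm describes, using the volume name "images" from xi4ohui's error output; a volume must normally be stopped before it can be deleted, and the force variant is only for the case where the down peers cannot be recovered:

    # check which peers are currently reachable
    gluster peer status

    # normal path, once every peer is back online
    gluster volume stop images
    gluster volume delete images

    # force path, per kshlm's suggestion, if the down peers cannot be brought back
    gluster volume delete images force
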
07:33 k4n0 joined #gluster
07:33 deniszh joined #gluster
07:34 xi4ohui kshlm, thanks
07:39 xi4ohui @kshlm, force delete is unsuccessful; it returns `volume stop: images: failed: Another transaction is in progress for images. Please try again after sometime.`
07:40 xi4ohui I am sure there are no other operations running on that volume.
07:40 jtux left #gluster
07:42 jkroon joined #gluster
07:42 xi4ohui I found the error log *[2016-09-20 07:40:53.285436] E [MSGID: 106170] [glusterd-handshake.c:1060:gd_validate_mgmt_hndsk_req] 0-management: Rejecting management handshake request from unknown peer 10.21.244.16:49112* in the `etc-glusterfs-glusterd.vol.log` file
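
The "Another transaction is in progress" error usually means glusterd believes a cluster-wide lock is still held, and the "unknown peer" handshake rejection suggests a node outside the current trusted pool is still trying to connect. A hedged sketch of the usual first checks (assuming systemd; restarting glusterd does not touch running brick processes):

    # confirm which nodes this peer still considers part of the trusted pool
    gluster peer status

    # a stale transaction lock is often cleared by restarting the management
    # daemon on the peers that are still up; bricks keep serving data meanwhile
    systemctl restart glusterd
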
07:47 prth joined #gluster
07:48 hackman joined #gluster
07:49 derjohn_mob joined #gluster
07:55 apandey joined #gluster
08:00 jtux joined #gluster
08:03 siel joined #gluster
08:18 ieth0 joined #gluster
08:19 ahino joined #gluster
08:22 [diablo] joined #gluster
08:26 Slashman joined #gluster
08:28 xavih joined #gluster
08:28 malevolent joined #gluster
08:33 RK joined #gluster
08:34 RK hi everyone, is there a set of best practices for running gluster at scale in production?
08:34 RK links and pointers appreciated
08:38 malevolent joined #gluster
08:38 xavih joined #gluster
08:42 sanoj joined #gluster
08:42 social joined #gluster
08:55 Philambdo joined #gluster
08:58 derjohn_mob joined #gluster
09:03 k4n0 joined #gluster
09:19 jri_ joined #gluster
09:30 arcolife joined #gluster
09:31 karthik_ joined #gluster
09:33 mhulsman joined #gluster
09:37 jri joined #gluster
09:59 ashiq joined #gluster
10:16 derjohn_mob joined #gluster
10:18 jri_ joined #gluster
10:22 loadtheacc joined #gluster
10:22 msvbhat joined #gluster
10:24 malevolent joined #gluster
10:25 xavih joined #gluster
10:29 Wizek joined #gluster
10:30 k4n0 joined #gluster
10:32 ramky joined #gluster
10:35 hgowtham joined #gluster
10:36 Muthu_ joined #gluster
10:42 arcolife joined #gluster
10:49 mhulsman1 joined #gluster
10:56 kotreshhr joined #gluster
11:14 derjohn_mob joined #gluster
11:14 shubhendu joined #gluster
11:23 devyani7 joined #gluster
11:25 devyani7 joined #gluster
11:27 hchiramm joined #gluster
11:32 msvbhat joined #gluster
11:34 loadtheacc joined #gluster
11:36 ndarshan joined #gluster
11:40 baojg joined #gluster
11:45 nishanth joined #gluster
11:48 ankitraj joined #gluster
11:48 poornima joined #gluster
11:52 johnmilton joined #gluster
11:54 skoduri joined #gluster
11:54 johnmilton joined #gluster
11:56 hgowtham joined #gluster
12:00 msvbhat joined #gluster
12:03 pdrakeweb joined #gluster
12:03 shubhendu joined #gluster
12:09 karnan joined #gluster
12:33 [diablo] joined #gluster
12:40 unclemarc joined #gluster
12:41 arcolife joined #gluster
12:43 sim0nx joined #gluster
12:45 sim0nx Hi ... I have a working smb+glusterfs-vfs setup. My problem is that when 1 user opens a file, it is impossible for a 2nd user to open the same file, even read-only -> "Input/output error"
12:45 sim0nx has anybody experienced this before and might be able to help me, please?
12:46 sim0nx samba version 4.2.10 ... glusterfs 3.7.11
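
For reference, a typical smb.conf share stanza for the vfs_glusterfs module looks roughly like the sketch below; the share and volume names are made up, and whether any of this matches sim0nx's setup is not confirmed in the log. "kernel share modes = no" is commonly recommended for this module, since it has no kernel-visible file descriptor for share-mode locking:

    [gvol]
        path = /
        read only = no
        vfs objects = glusterfs
        glusterfs:volume = gv0
        glusterfs:logfile = /var/log/samba/glusterfs-gv0.%M.log
        glusterfs:loglevel = 7
        kernel share modes = no
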
12:52 ieth0 joined #gluster
13:00 pasik joined #gluster
13:00 pasik hello
13:00 glusterbot pasik: Despite the fact that friendly greetings are nice, please ask your question. Carefully identify your problem in such a way that when a volunteer has a few minutes, they can offer you a potential solution. These are volunteers, so be patient. Answers may come in a few minutes, or may take hours. If you're still in the channel, someone will eventually offer an answer.
13:01 pasik centos7 with gluster37.. how do I properly stop gluster brick daemons? I want to do some maintenance and stop *all* the gluster related stuff on the server
13:01 pasik it seems stopping the default glusterd and glusterfsd services doesn't stop all the processes..
13:02 Klas correct, the brick processes live on
13:02 pasik but I need to stop them. what's the correct method?
13:02 pasik just kill them?
13:02 pasik (how are they stopped on shutdown/reboot?)
13:03 Klas not sure =)
13:03 Klas I usually kill them
13:03 rastar joined #gluster
13:03 jiffin joined #gluster
13:04 kimmeh joined #gluster
13:04 Klas https://bugzilla.redhat.com/show_bug.cgi?id=1022542
13:04 glusterbot Bug 1022542: unspecified, unspecified, ---, ndevos, CLOSED ERRATA, glusterfsd stop command does not stop bricks
13:04 Klas relevant thread
13:05 pasik Klas: yep, already found that..
13:05 Klas I am also interested, btw =)
13:05 jkroon I also just normally kill them.
13:06 ieth0 joined #gluster
13:06 jkroon but currently it's definitely an overly manual situation.
13:07 markd__ joined #gluster
13:08 pasik jkroon: ok, thanks
13:09 msvbhat joined #gluster
13:09 mreamy joined #gluster
13:10 Klas we will probably handle updates by running the update on one server, rebooting it, and then, once it's back up, taking the next one
13:10 Klas that is our generally preferred method regardless
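
A minimal sketch of the stop-everything step discussed above for a maintenance window, assuming a systemd host (e.g. the CentOS 7 box pasik mentions); note that pkill glusterfs also takes down any FUSE mounts, the self-heal daemon and gluster NFS on that host:

    systemctl stop glusterd      # management daemon only; bricks keep running
    pkill glusterfsd             # brick processes
    pkill glusterfs              # self-heal daemon, gluster NFS, local FUSE mounts
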
13:13 hgowtham joined #gluster
13:14 bowhunter joined #gluster
13:19 R0ok_ joined #gluster
13:23 virusuy joined #gluster
13:26 prth joined #gluster
13:27 shaunm joined #gluster
13:30 kotreshhr joined #gluster
13:41 ninjaryan joined #gluster
13:41 rastar joined #gluster
13:47 gem joined #gluster
13:47 skylar joined #gluster
13:54 arcolife joined #gluster
13:58 robb_nl joined #gluster
13:58 bluenemo joined #gluster
13:59 skoduri_ joined #gluster
13:59 hchiramm joined #gluster
14:12 ninjaryan joined #gluster
14:13 kpease joined #gluster
14:20 kramdoss_ joined #gluster
14:22 ninjaryan joined #gluster
14:26 BitByteNybble110 joined #gluster
14:27 ankitraj joined #gluster
14:31 TvL2386 joined #gluster
14:38 kdhananjay joined #gluster
14:47 robb_nl joined #gluster
14:47 mreamy joined #gluster
14:51 ieth0 joined #gluster
14:59 ninjarya1 joined #gluster
15:08 nbalacha joined #gluster
15:09 wushudoin joined #gluster
15:10 wushudoin joined #gluster
15:11 skoduri joined #gluster
15:26 ninjaryan joined #gluster
15:27 karnan joined #gluster
15:30 kpease joined #gluster
15:37 gem joined #gluster
15:37 ieth0 joined #gluster
15:44 jkroon joined #gluster
15:47 prth joined #gluster
16:01 jiffin joined #gluster
16:09 harish joined #gluster
16:13 ninjaryan joined #gluster
16:16 rafi joined #gluster
16:16 hackman joined #gluster
16:18 rwheeler joined #gluster
16:24 Gambit15 joined #gluster
16:30 malevolent joined #gluster
16:30 xavih joined #gluster
16:30 ieth0 joined #gluster
16:34 ninjaryan joined #gluster
16:39 shubhendu joined #gluster
16:39 RameshN joined #gluster
16:49 tom[] joined #gluster
16:55 ninjaryan joined #gluster
16:59 jri joined #gluster
16:59 hchiramm joined #gluster
17:02 msvbhat joined #gluster
17:04 bowhunter joined #gluster
17:06 rafi joined #gluster
17:06 kimmeh joined #gluster
17:14 kpease joined #gluster
17:15 ninjaryan joined #gluster
17:16 d-fence joined #gluster
17:16 d-fence_ joined #gluster
17:18 kpease joined #gluster
17:28 level7 joined #gluster
17:29 ahino joined #gluster
17:31 ieth0_ joined #gluster
17:36 ninjaryan joined #gluster
17:38 prth joined #gluster
17:39 hagarth joined #gluster
17:44 ninjaryan joined #gluster
17:49 plarsen joined #gluster
17:51 jiffin joined #gluster
17:52 BitByteNybble110 joined #gluster
17:53 [diablo] joined #gluster
18:01 mreamy joined #gluster
18:03 msvbhat joined #gluster
18:08 jiffin joined #gluster
18:12 bowhunter joined #gluster
18:15 kpease joined #gluster
18:15 mreamy joined #gluster
18:19 ninjaryan joined #gluster
18:23 plarsen joined #gluster
18:38 jiffin1 joined #gluster
18:39 karnan joined #gluster
18:40 ninjaryan joined #gluster
18:47 ten10 joined #gluster
18:48 ten10 so I stopped by the other day but missed the reply... if I have a gluster volume with 2-node replication... can I somehow use iscsi to fail over to either of the nodes when 1 dies?
18:51 ieth0 joined #gluster
18:54 bluenemo joined #gluster
18:58 ninjaryan joined #gluster
19:04 ninjaryan joined #gluster
19:06 guhcampos joined #gluster
19:07 malevolent joined #gluster
19:07 xavih joined #gluster
19:09 ten10 seems like I need a floating IP shared between the 2 nodes for iscsi
19:16 kpease joined #gluster
19:19 johnmilton joined #gluster
19:29 h4rry joined #gluster
19:36 johnmilton joined #gluster
19:58 rafi joined #gluster
20:13 karnan joined #gluster
20:20 JoeJulian ten10: We log the chat. See the /topic
20:24 ten10 ok, 1 person did eventually respond
20:24 JoeJulian Me
20:24 ten10 I believe multipath iscsi doesn't work when going across nodes
20:25 JoeJulian What kind of node?
20:25 ten10 sorry... linux iscsi target
20:29 JoeJulian So multipath iscsi doesn't work when going across iscsi targets? I thought that was the whole point of multipath.
20:30 JoeJulian multiple targets to reach the same block storage.
20:30 bowhunter joined #gluster
20:33 ten10 JoeJulian, yeah, maybe I'm doing something wrong and I'll have to verify when I have more time to test... basically it treated the 2 iscsi targets pointing to the same location as different datastores
20:41 JoeJulian And I freely admit I know almost nothing about iscsi. Everything I've read leads me to believe it's too unstable for me to trust.
20:45 ten10 ah, it definitely has its quirks... the thing is I'm running a vmware lab at home purely for testing and learning... trying to do some advanced things for fun, but really none of this would be supported by vmware
20:45 kpease joined #gluster
20:45 ten10 I'd need to buy a nice netapp or emc for testing iscsi ;) MAYBE the win2k12 iscsi implementation is supported, don't know
20:48 ten10 one specific question I have regarding gluster though: say I have 2 disks of different sizes and I make a directory on each for a brick... which volume size gets reported?
20:53 JoeJulian For a replica volume, it's random. Whichever responds first.
20:53 JoeJulian Typically you'll want replicas to be the same size.
20:54 JoeJulian Otherwise you'll end up with one brick getting full but the client still being able to write as the other brick will accept data. That leaves one copy stale and you might as well not have a replica.
20:56 ten10 ok, thanks... yes, I'm going to switch to an LVM LUN rather than a directory. Both nodes will have the same size LUN
20:56 ten10 that was just something I noticed during my initial testing/goofing around
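
A minimal sketch of the equal-sized-LVM-brick layout ten10 describes, with the volume group, size, hostnames and volume name all made up for illustration; the LV, filesystem and mount steps would be run on both nodes:

    # on each node: identically sized LV, XFS, mounted as the brick
    lvcreate -L 100G -n brick1 vg_gluster
    mkfs.xfs -i size=512 /dev/vg_gluster/brick1
    mkdir -p /data/brick1
    mount /dev/vg_gluster/brick1 /data/brick1

    # on one node: build the 2-way replica from the two equal bricks
    gluster volume create gv0 replica 2 node1:/data/brick1/gv0 node2:/data/brick1/gv0
    gluster volume start gv0
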
21:07 kimmeh joined #gluster
21:33 prth joined #gluster
21:40 cloph JoeJulian: maybe it was just coincidence, but for me the reported size was just that of the smallest brick - or rather, the free space reported corresponded to min(remaining space on each brick). It was new to me that a client could actually see free disk space when there is none...
21:47 B21956 joined #gluster
22:21 kpease joined #gluster
23:29 jeremyh joined #gluster
23:38 mreamy joined #gluster
23:41 rafi joined #gluster
23:52 om joined #gluster
23:58 hagarth joined #gluster
