
IRC log for #gluster, 2017-04-06


All times shown according to UTC.

Time Nick Message
00:13 kramdoss_ joined #gluster
01:30 arpu joined #gluster
01:37 shyam joined #gluster
01:46 Wizek_ joined #gluster
01:46 n0b0dyh3r3 joined #gluster
01:48 ilbot3 joined #gluster
01:48 Topic for #gluster is now Gluster Community - http://gluster.org | Documentation - https://gluster.readthedocs.io/en/latest/ | Patches - http://review.gluster.org/ | Developers go to #gluster-dev | Channel Logs - https://botbot.me/freenode/gluster/ & http://irclog.perlgeek.de/gluster/
01:50 derjohn_mobi joined #gluster
02:09 atinm joined #gluster
02:19 daMaestro joined #gluster
02:50 kramdoss_ joined #gluster
03:04 Gambit15 joined #gluster
03:11 aravindavk joined #gluster
03:13 prasanth joined #gluster
03:21 atinm joined #gluster
03:26 susant left #gluster
03:46 riyas joined #gluster
03:48 magrawal joined #gluster
03:51 atinm joined #gluster
04:00 itisravi joined #gluster
04:02 BuBU291 joined #gluster
04:04 _KaszpiR__ joined #gluster
04:06 nixpanic_ joined #gluster
04:06 nixpanic_ joined #gluster
04:06 krink_ joined #gluster
04:06 msvbhat joined #gluster
04:07 gem joined #gluster
04:18 DV joined #gluster
04:27 sanoj joined #gluster
04:28 ppai joined #gluster
04:29 skumar joined #gluster
04:36 Shu6h3ndu_ joined #gluster
04:40 susant joined #gluster
04:42 brayo joined #gluster
04:57 poornima joined #gluster
04:57 skumar joined #gluster
05:00 susant joined #gluster
05:00 susant left #gluster
05:01 nishanth joined #gluster
05:06 Philambdo joined #gluster
05:07 ashiq joined #gluster
05:07 apandey joined #gluster
05:09 sbulage joined #gluster
05:17 apandey_ joined #gluster
05:23 apandey__ joined #gluster
05:24 ankitr joined #gluster
05:27 shdeng joined #gluster
05:27 ndarshan joined #gluster
05:31 ankitr joined #gluster
05:35 karthik_us joined #gluster
05:38 Saravanakmr joined #gluster
05:38 nbalacha joined #gluster
05:42 R0ok_ joined #gluster
05:45 linux joined #gluster
05:45 hgowtham joined #gluster
05:47 linux I have a problem with gluster: after compiling gluster from source, I tried to start it with the command systemctl start glusterd, but I'm getting the message "Failed to start glusterd.service: Unit not found."
05:47 armyriad joined #gluster
05:47 linux I enabled gluster to start with the server, and then it's working, but when I stop it and try to start it, I'm getting the same message again
05:48 linux I'm using CENTOS 7
05:50 rafi joined #gluster
05:54 ankitr joined #gluster
05:55 gyadav joined #gluster
05:55 jiffin joined #gluster
05:55 kotreshhr joined #gluster
05:55 Saravanakmr joined #gluster
05:58 apandey_ joined #gluster
05:58 prasanth joined #gluster
06:00 sona joined #gluster
06:04 apandey__ joined #gluster
06:08 apandey_ joined #gluster
06:10 delhage joined #gluster
06:11 mbukatov joined #gluster
06:19 linux Gluster source is downloaded from Gluster webpage
06:23 jiffin linux: u can download source from https://review.gluster.org/#/admin/projects/glusterfs
06:23 glusterbot Title: Gerrit Code Review (at review.gluster.org)
06:29 jtux joined #gluster
06:29 jtux left #gluster
06:36 msvbhat joined #gluster
06:44 bulde joined #gluster
06:46 linux Do you mean that the source from this link has a problem with CentOS? https://download.gluster.org/pub/gluster/glusterfs/LATEST/
06:46 glusterbot Title: Index of /pub/gluster/glusterfs/LATEST (at download.gluster.org)
06:50 jkroon joined #gluster
06:52 derjohn_mob joined #gluster
07:00 jiffin linux: No, I thought you needed the git repo for the glusterfs codebase
07:04 linux No, I always compile programs from source on servers, but this is the first time that after all the steps you cannot start gluster. I think this is a bug
07:05 jkroon_ joined #gluster
07:06 jiffin which version of gluster are u using?
07:07 linux 3.10.1
07:10 jiffin linux: did u check /var/log/glusterfs/glusterd.log ?
07:11 sanoj linux, jiffin is it that the /usr/lib/systemd/system/glusterd.service does not exist?
07:11 sanoj since it says unit not found
07:12 Prasad joined #gluster
07:14 msvbhat joined #gluster
07:15 linux There is no log file, because you cannot start glusterd
07:15 linux Only message: Failed to start glusterd.service: Unit not found.
07:16 apandey joined #gluster
07:17 linux sanoj, yes, there is no glusterd.service on that path
07:18 sanoj Can you run a find to check where the file glusterd.service has been installed (if it has)? Else try a make install again.
07:18 sanoj linux, ^
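One way to run the find sanoj suggests, as a minimal sketch (run as root; the stderr redirect only hides permission-denied noise):

    find / -name 'glusterd.service' 2>/dev/null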
07:18 linux here /usr/local/lib/systemd/system/glusterd.service
07:19 linux I did the install a couple of times, but I always get glusterd.service in that location
07:21 linux I also added /usr/local/lib into /etc/ld.so.conf.d/
07:22 anoopcs linux, it is not required. systemd by default searches in /usr/local/lib/systemd/system/
07:23 alezzandro joined #gluster
07:25 anoopcs linux, Can you please try after running systemctl daemon-reload
07:25 linux Ok, finished, still nothing
07:26 linux hmmm, ok, /var/log/messages has an error now
07:26 linux Apr  6 10:25:36 Sb21 systemd: [/usr/local/lib/systemd/system/glusterd.service:12] Path '-${prefix}/etc/sysconfig/glusterd' is not absolute, ignoring.
07:26 Karan joined #gluster
07:28 anoopcs linux, Are you still getting the same error "Unit not found"? I hope not
07:28 linux yes
07:28 linux Still same error
07:29 linux who
07:29 linux who
07:30 anoopcs But this time it could detect the file. That's why you saw the other error in /var/log/messages.
07:30 humblec joined #gluster
07:30 anoopcs linux, What is your systemd version?
07:32 linux systemd-219-30
07:32 flying joined #gluster
07:33 anoopcs linux, Ok. One way to get around this issue is to manually replace line no:12 inside /usr/local/lib/systemd/system/glusterd.service to look like
07:34 anoopcs But still, it's just a warning; systemd ignores it and should continue
07:35 linux To look like?
07:37 anoopcs linux, Actually no need to replace. Because systemd ignores it.
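For reference, a sketch of what the line in question likely looks like, assuming it is the EnvironmentFile entry from the shipped unit template and the /usr/local prefix of this source build:

    # /usr/local/lib/systemd/system/glusterd.service, line 12, as reported above
    EnvironmentFile=-${prefix}/etc/sysconfig/glusterd
    # with the prefix substituted to an absolute path it would read
    EnvironmentFile=-/usr/local/etc/sysconfig/glusterd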
07:37 kshlm linux, Looks like something went wrong with configure. It should have correctly replaced `${prefix}` in the glusterd unit file with the configure prefix.
07:37 kshlm linux, Please run `autogen.sh` and `configure` once more, and compile again.
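A minimal sketch of that rebuild; the source directory is only an example, and --prefix=/usr/local matches where the unit file ended up earlier:

    cd /path/to/glusterfs-3.10.1   # wherever the source tarball was unpacked
    ./autogen.sh
    ./configure --prefix=/usr/local
    make
    sudo make install
    sudo systemctl daemon-reload   # pick up the regenerated unit file
    sudo systemctl start glusterd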
07:38 ashiq joined #gluster
07:38 linux ok
07:39 kshlm linux, Also if you're using CentOS, why not just use the Storage-SIG packages? You only need to compile if you're interested in development.
07:40 anoopcs kshlm, It's a bug. I have seen it before
07:40 anoopcs But it should not affect start-up.
07:41 linux Earlier we had a problem with Storage-SIG, and since then we have been using the source
07:41 linux ok. configure
07:41 linux https://pastebin.com/SMfDvDvt
07:41 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
07:44 linux after recompiling, same error again
07:44 jiffin linux: can u check the following file /usr/local/etc/glusterfs/glusterd.vol
07:44 jiffin ?
07:45 linux http://paste.ubuntu.com/24326263/
07:45 glusterbot Title: Ubuntu Pastebin (at paste.ubuntu.com)
07:45 jiffin linux: looks good
07:46 anoopcs linux, What does systemctl status glusterd say?
07:48 linux Just a second, I restarted the server
07:52 linux After the restart gluster is working, and this is the status
07:52 linux https://paste.fedoraproject.org/paste/u190qS7HRQAw95EH3XMla15M1UNdIGYhyRLivL9gydE=/raw
07:52 linux now I will stop, and try to start gluster again
07:53 linux Not working; status:
07:53 linux https://paste.fedoraproject.org/paste/y6pzx0wM7IiK35uT8YU0dF5M1UNdIGYhyRLivL9gydE=/raw
07:55 ivan_rossi joined #gluster
08:06 ivan_rossi left #gluster
08:16 ashiq joined #gluster
08:32 nobody481 joined #gluster
08:33 linux Any idea?
08:33 Philambdo joined #gluster
08:41 buvanesh_kumar joined #gluster
08:52 msvbhat joined #gluster
08:52 Humble joined #gluster
08:52 anoopcs linux, Check in glusterd logs
08:58 linux There are no gluster logs, because you cannot start gluster
09:01 Prasad_ joined #gluster
09:02 kshlm linux, `journalctl -b -u glusterd.service` what does this give?
09:03 aardbolreiziger joined #gluster
09:06 linux https://paste.fedoraproject.org/paste/xKF~0LHaWvwwBTC4vncSrl5M1UNdIGYhyRLivL9gydE=/raw
09:07 kshlm The logs say that glusterd actually started.
09:08 linux oops, sorry, I restarted the server
09:08 linux just a second
09:09 linux https://paste.fedoraproject.org/paste/YxBkZF3-WQRYpMK-Y7fLPV5M1UNdIGYhyRLivL9gydE=/raw
09:11 Wizek_ joined #gluster
09:16 mallorn joined #gluster
09:17 kshlm linux, Not much info there either. glusterd seems to be exiting immediately.
09:18 kshlm Could you just run `glusterd` directly and check if it starts.
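A sketch of running the daemon by hand for this check; the path assumes the /usr/local prefix used above, and --debug keeps glusterd in the foreground logging to stderr, which helps when it exits immediately:

    /usr/local/sbin/glusterd           # the plain start kshlm asks for
    /usr/local/sbin/glusterd --debug   # foreground with verbose logs, to see why it exits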
09:18 linux yes, it starts
09:18 Prasad__ joined #gluster
09:20 linux And it automatically starts when you reset the server
09:20 absolutejam joined #gluster
09:23 p7mo joined #gluster
09:27 loadtheacc joined #gluster
09:37 kdhananjay joined #gluster
09:38 loadtheacc joined #gluster
09:38 kshlm linux, Looks like whatever problem you had fixed itself.
09:41 linux It's not fixed; every time you want to start gluster you need to restart the server
09:42 linux and because of this, some volumes didn't start automatically
09:51 kramdoss_ joined #gluster
09:53 prasanth joined #gluster
09:53 absolutejam Did my messages earlier go through?
09:54 absolutejam Had some issues with connection.
10:02 Prasad_ joined #gluster
10:06 msvbhat joined #gluster
10:08 hgowtham joined #gluster
10:14 Prasad__ joined #gluster
10:29 apandey joined #gluster
10:37 buvanesh_kumar joined #gluster
10:50 hgowtham joined #gluster
11:01 om2 joined #gluster
11:11 rafi absolutejam: I don't think so
11:15 absolutejam Just curious if something like gluster can support 2 nodes accessing the same directly attached storage
11:15 absolutejam only 1 node will be using a folder in the root of the drive at any time
11:27 msvbhat joined #gluster
11:28 atinm joined #gluster
11:33 Abazigal joined #gluster
11:35 [fre] joined #gluster
11:36 Abazigal Hi guys, I'm testing gluster (for enterprise usage), and I'm wondering how safe it is to create/delete files directly in the bricks
11:36 [fre] Hi Guys. I do have a little issue here...
11:36 [fre] for maintenance, I had to kill one node of a 2-node gluster cluster.
11:37 [fre] I switched over all the VIPs to the running node, but after killing the maintenance node, the "good" one went down too.
11:37 [fre] makes sense, somehow.
11:37 Abazigal basically, our usage is to store a lot of files in time-sharded folders; so we need to periodically create a new folder and its structure, and remove the oldest folder when storage is full
11:38 Abazigal the problem is that from a client (using fuse), creation of a folder takes 10m and deletion ~25m
11:38 jiffin absolutejam: if u are using different directory then it is okay IMO
11:39 jiffin Abazigal: it is not recommended to create files/directories directly on the brick
11:39 Abazigal but if I create the folder structure on each brick on the server, it takes seconds; and it seems OK on the client side (my directory is created/deleted)
11:39 jiffin glusterfs assigns a gfid to every file/directory inside a volume
11:40 [fre] but, all my volumes seem to remain offline now, with the nfs-service I need them for.
11:41 [fre] might be a silly question, but, do I need to detach the maintenance node from it? how about the resync of my volumes afterwards if I do so?
11:41 Abazigal jiffin: would it be a problem even for a delete-only use case? (just removing 1 folder containing old data)
11:41 jiffin if you create files/directories directly on the brick, there is a high probability of getting different gfids for directories, and it also disrupts the distribution logic
11:42 jiffin Abazigal: the entire distribution logic resides on the client side
11:43 jiffin and hardlinks are created for each file in a hidden directory .glusterfs (with the gfid as the name)
11:44 jiffin so if u are using only a distributed volume, and if you can also delete those hardlinks from the .glusterfs directory, then it is okay IMO
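A sketch of what that looks like on a brick, with a made-up brick path of /data/brick1; for regular files the .glusterfs entry is a hardlink filed under the first two pairs of hex digits of the gfid:

    getfattr -n trusted.gfid -e hex /data/brick1/olddir/somefile
    # e.g. trusted.gfid=0xd1a2b3c4e5f60718293a4b5c6d7e8f90 maps to the hardlink at
    ls -li /data/brick1/.glusterfs/d1/a2/d1a2b3c4-e5f6-0718-293a-4b5c6d7e8f90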
11:45 absolutejam jiffin using what sorry?
11:45 Abazigal jiffin: ok; thank you for your time :) I'll dig into this (I can afford to spend 10m creating a new folder, but 30 to delete the old ones is really too long for us)
11:48 jiffin absolutejam: do you mean to say "Is it possible to create directories/folders on same storage for different volumes? "
11:56 nbalacha joined #gluster
11:57 nh2 joined #gluster
11:58 absolutejam I was originally looking at a way to provide 2 hosts with the ability to safely access the same shared volume (a shared disk via the hypervisor)
11:59 absolutejam I thought I could just use GFS2, but once I started looking into it, it involved setting up a whole cluster
11:59 absolutejam It's for docker data
11:59 absolutejam Docker container running on Node A, looking at folder on shared drive
12:00 absolutejam Node A dies, node B spins up the container and looks at the same folder on the shared drive to carry on
12:00 absolutejam I guess the reason I'd have to set up the clustering is because of a split brain scenario though...
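The usual Gluster-native alternative to a shared disk is a replicated volume where each node keeps its own brick; a minimal sketch with made-up host and brick names (not something discussed above, and plain replica 2 still carries the split-brain risk mentioned here, which is why an arbiter brick is normally added):

    gluster peer probe nodeB
    gluster volume create dockerdata replica 2 nodeA:/bricks/dockerdata/brick nodeB:/bricks/dockerdata/brick
    gluster volume start dockerdata
    mount -t glusterfs localhost:/dockerdata /mnt/dockerdata   # on each node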
12:00 BuBU291 left #gluster
12:12 pjrebollo joined #gluster
12:13 pjrebollo Does anyone know what can cause this problem in the geo-replication log?
12:15 pjrebollo [2017-04-06 12:07:14.081565] E [fuse-bridge.c:3435:fuse_xattr_cbk] 0-glusterfs-fuse: extended attribute not supported by the backend storage
12:17 subscope joined #gluster
12:19 jiffin pjrebollo: seems like it's complaining that the bricks do not support extended attributes
12:20 pjrebollo Did I miss something in the configuration process?
12:20 pjrebollo Both glusters are running CentOS 7.3 and Gluster 3.8.9.
12:21 jiffin pjrebollo: can u check the bricks logs for clues
12:21 jiffin pjrebollo: ur backend storage supports extended attributes, right?
12:22 pjrebollo Let me double check...
12:25 pjrebollo I'm able to do a "setfattr" on the slave volume without problem.
12:26 pjrebollo Is that enough, or do you suggest anything else to validate extended attributes?
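A sketch of probing the brick's backing filesystem directly rather than through the mounted volume, since gluster and geo-replication metadata live in the trusted.* namespace; the brick path is made up, and the commands need root:

    touch /data/brick1/xattr-probe
    setfattr -n trusted.probe -v works /data/brick1/xattr-probe
    getfattr -d -m . -e hex /data/brick1/xattr-probe
    rm -f /data/brick1/xattr-probe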
12:27 nh2 joined #gluster
12:29 jiffin pjrebollo: I hope so
12:32 kpease joined #gluster
12:39 jiffin1 joined #gluster
12:47 jiffin1 joined #gluster
12:54 msvbhat joined #gluster
12:55 shyam joined #gluster
13:05 gyadav joined #gluster
13:06 nbalacha joined #gluster
13:08 Prasad_ joined #gluster
13:23 Prasad__ joined #gluster
13:29 msvbhat joined #gluster
13:35 skylar joined #gluster
13:35 ira joined #gluster
13:37 rwheeler joined #gluster
13:37 plarsen joined #gluster
13:39 Shu6h3ndu_ joined #gluster
13:43 squizzi joined #gluster
13:46 prasanth joined #gluster
13:47 apandey_ joined #gluster
14:03 nbalacha joined #gluster
14:07 atm0sphere joined #gluster
14:11 icey_ joined #gluster
14:11 gyadav joined #gluster
14:11 icey joined #gluster
14:14 shyam joined #gluster
14:15 bitchecker joined #gluster
14:17 PotatoGim joined #gluster
14:31 gyadav_ joined #gluster
14:32 sbulage joined #gluster
14:40 nbalacha joined #gluster
14:42 kramdoss_ joined #gluster
14:48 chawlanikhil24 joined #gluster
14:55 farhorizon joined #gluster
15:03 susant joined #gluster
15:12 kotreshhr left #gluster
15:16 krink joined #gluster
15:18 rafi1 joined #gluster
15:27 jdarcy joined #gluster
15:28 msvbhat joined #gluster
15:31 riyas joined #gluster
15:39 rafi joined #gluster
15:41 Abazigal me again :D I'm testing fault tolerance by shutting down a host involved in a 3-node distributed volume (no repl)
15:42 Abazigal on my client, after some seconds of hang, "ls" returns all my files minus a third; normal
15:43 Abazigal but then, if I try to write things, I get a lot of IOErrors for specific file names
15:43 Abazigal does it mean that the hash attribution is not recalculated when a host goes down?
15:47 susant joined #gluster
15:49 jiffin joined #gluster
15:50 gem joined #gluster
15:51 vbellur joined #gluster
15:52 atm0sphere joined #gluster
15:52 vbellur joined #gluster
15:53 vbellur joined #gluster
16:06 chawlanikhil24 joined #gluster
16:10 susant joined #gluster
16:10 derjohn_mob joined #gluster
16:27 ic0n joined #gluster
16:29 susant joined #gluster
16:29 Gambit15 joined #gluster
16:31 Chewi joined #gluster
16:33 Chewi hello. I'm using geo-replication, which is working fine, and then mounting the slave volume on another system using FUSE. during some internal scans, the FUSE mount crapped out and despite the log saying "Client process will keep trying to connect to glusterd until brick's port is available", it then shut down the mount and it was still dead a week later. I can't find any info about making it retry. is that possible?
16:34 ankitr joined #gluster
16:37 gyadav_ joined #gluster
16:40 rafi joined #gluster
16:44 Chewi I had hoped that systemd might provide a solution but sadly not
16:46 jiffin joined #gluster
16:50 purpleidea joined #gluster
16:50 purpleidea joined #gluster
16:54 vbellur joined #gluster
16:55 vbellur joined #gluster
16:55 vbellur joined #gluster
16:56 vbellur joined #gluster
16:56 overclk joined #gluster
16:57 vbellur joined #gluster
16:57 vbellur joined #gluster
16:59 msvbhat joined #gluster
16:59 vbellur joined #gluster
17:05 saduser joined #gluster
17:20 Saravanakmr joined #gluster
17:30 ankitr joined #gluster
17:38 kpease joined #gluster
17:47 derjohn_mob joined #gluster
17:51 jwd joined #gluster
18:03 Saravanakmr joined #gluster
18:05 JoeJulian Abazigal: Actually it means that the hash resolved to the missing brick. All file creates whose names hash to the missing brick will error, and all directory creation will error.
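For context, the hash ranges are fixed per directory and stored as an xattr on each brick, so they are not recalculated when a brick disappears; a sketch of inspecting them on a surviving brick, with made-up paths:

    getfattr -n trusted.glusterfs.dht -e hex /data/brick1/somedir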
18:05 JoeJulian Chewi: You're right, it should try to redo the mount. Please file a bug report.
18:05 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
18:05 Chewi JoeJulian: ok
18:06 cholcombe joined #gluster
18:13 vbellur joined #gluster
18:21 rafi joined #gluster
18:46 shyam joined #gluster
18:52 krink_ joined #gluster
18:55 gyadav_ joined #gluster
18:57 prasanth joined #gluster
19:00 vbellur joined #gluster
19:06 armyriad joined #gluster
19:14 absolutejam joined #gluster
19:19 plarsen joined #gluster
19:28 vbellur joined #gluster
19:40 jwd joined #gluster
19:42 Chewi JoeJulian: was going to file a bug but I think it may be this one https://bugzilla.redhat.com/show_bug.cgi?id=1422781
19:42 glusterbot Bug 1422781: high, medium, ---, jeff, CLOSED CURRENTRELEASE, Transport endpoint not connected error seen on client when glusterd is restarted
19:42 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
19:42 Chewi JoeJulian: we haven't updated to 3.10.1 yet
19:43 Chewi JoeJulian: we didn't restart glusterd but it may have had the same effect
19:50 msvbhat joined #gluster
19:53 derjohn_mob joined #gluster
20:08 squizzi joined #gluster
20:20 buvanesh_kumar joined #gluster
20:21 zerick joined #gluster
20:22 JoeJulian +
20:22 JoeJulian 1
20:22 JoeJulian stupid keyboard
20:28 shyam joined #gluster
20:33 arpu joined #gluster
20:35 glustin nh2, JoeJulian: re a couple days ago: all "point" releases are "stable"
20:35 glustin it's just a matter of how long they are maintained
20:37 JoeJulian yeah, I'm aware of that theory. :D
20:37 JoeJulian But I've never seen a .0 release that's truly stable.
20:38 glustin I don't think I've seen a .anything release that's truly stable. ;)
20:38 JoeJulian touche
20:48 Karan joined #gluster
20:54 vbellur joined #gluster
20:57 vbellur joined #gluster
21:00 vbellur joined #gluster
21:02 vbellur joined #gluster
21:10 farhorizon joined #gluster
21:18 farhorizon joined #gluster
21:39 loadtheacc joined #gluster
21:48 farhorizon joined #gluster
22:11 loadtheacc joined #gluster
22:47 krink joined #gluster
22:57 krink joined #gluster
22:58 serg_k joined #gluster
23:14 plarsen joined #gluster
23:57 MrAbaddon joined #gluster
23:59 krink joined #gluster
