
IRC log for #gluster, 2016-02-02


All times shown according to UTC.

Time Nick Message
00:01 haomaiw__ joined #gluster
00:11 tswartz joined #gluster
00:16 twaddle joined #gluster
00:18 twaddle joined #gluster
00:18 kenansulayman joined #gluster
00:20 Dasiel joined #gluster
00:20 cyberbootje joined #gluster
00:38 nathwill joined #gluster
00:44 atinm joined #gluster
00:50 liewegas joined #gluster
00:51 theron joined #gluster
01:01 haomaiwa_ joined #gluster
01:03 caitnop joined #gluster
01:11 atrius joined #gluster
01:21 nangthang joined #gluster
01:28 Lee1092 joined #gluster
01:34 moss joined #gluster
01:43 zhangjn joined #gluster
01:48 atrius joined #gluster
01:49 julim joined #gluster
01:51 plarsen joined #gluster
01:54 atinm joined #gluster
01:55 nhayashi joined #gluster
01:55 haomaiwa_ joined #gluster
02:01 haomaiwa_ joined #gluster
02:07 dusmant joined #gluster
02:18 calavera joined #gluster
02:20 theron joined #gluster
02:30 nickage__ joined #gluster
02:45 theron joined #gluster
02:45 harish joined #gluster
02:45 haomaiwang joined #gluster
02:49 gildub joined #gluster
02:54 zhangjn joined #gluster
03:01 haomaiwa_ joined #gluster
03:05 rafi joined #gluster
03:13 haomaiwa_ joined #gluster
03:16 ovaistariq joined #gluster
03:31 calisto joined #gluster
03:34 gem joined #gluster
03:42 shubhendu joined #gluster
03:43 nishanth joined #gluster
03:46 haomaiwa_ joined #gluster
03:51 EinstCra_ joined #gluster
03:52 kanagaraj joined #gluster
03:56 kdhananjay joined #gluster
04:01 haomaiwa_ joined #gluster
04:02 ramteid joined #gluster
04:06 itisravi joined #gluster
04:06 haomai___ joined #gluster
04:07 itisravi joined #gluster
04:08 nbalacha joined #gluster
04:12 calavera joined #gluster
04:26 vmallika joined #gluster
04:26 sakshi joined #gluster
04:28 jiffin joined #gluster
04:29 theron joined #gluster
04:32 atinm joined #gluster
04:44 nehar joined #gluster
04:47 zhangjn joined #gluster
04:49 nishanth joined #gluster
04:49 baojg joined #gluster
04:51 shubhendu joined #gluster
04:54 Manikandan joined #gluster
04:57 JesperA joined #gluster
04:58 pppp joined #gluster
04:59 aravindavk joined #gluster
05:00 nbalacha joined #gluster
05:01 haomaiwa_ joined #gluster
05:05 dusmant joined #gluster
05:11 ahino joined #gluster
05:14 Bhaskarakiran joined #gluster
05:16 rafi joined #gluster
05:26 ramky joined #gluster
05:30 baojg joined #gluster
05:34 kotreshhr joined #gluster
05:37 hgowtham joined #gluster
05:37 ramky joined #gluster
05:40 ashiq joined #gluster
05:45 ovaistariq joined #gluster
05:46 ovaistar_ joined #gluster
05:48 vmallika joined #gluster
05:56 nishanth joined #gluster
05:59 kdhananjay joined #gluster
06:01 haomaiwang joined #gluster
06:03 ndarshan joined #gluster
06:08 vimal joined #gluster
06:10 karnan joined #gluster
06:11 kovshenin joined #gluster
06:16 Saravanakmr joined #gluster
06:24 Bhaskarakiran joined #gluster
06:29 atalur joined #gluster
06:45 Apeksha joined #gluster
06:48 gem joined #gluster
06:55 gem joined #gluster
07:01 haomaiwa_ joined #gluster
07:02 SOLDIERz joined #gluster
07:04 dusmant joined #gluster
07:08 mhulsman joined #gluster
07:09 harish joined #gluster
07:19 jtux joined #gluster
07:25 robb_nl joined #gluster
07:29 RameshN joined #gluster
07:34 jtux joined #gluster
07:37 tswartz joined #gluster
07:38 [Enrico] joined #gluster
07:40 ovaistariq joined #gluster
07:44 Bhaskarakiran joined #gluster
07:48 om joined #gluster
07:50 shubhendu joined #gluster
08:01 haomaiwa_ joined #gluster
08:05 David_Varghese joined #gluster
08:06 David_Varghese hi, i was wondering why i suddenly stopped receiving notifications?
08:07 shaunm joined #gluster
08:08 David_Varghese sorry wrong channel
08:11 Dasiel joined #gluster
08:13 masterzen joined #gluster
08:18 Dasiel joined #gluster
08:20 fsimonce joined #gluster
08:22 kovshenin joined #gluster
08:31 arcolife joined #gluster
08:40 ahino joined #gluster
08:41 ovaistariq joined #gluster
08:48 skoduri joined #gluster
08:53 kshlm joined #gluster
08:53 karnan joined #gluster
08:54 shaunm joined #gluster
08:59 ctria joined #gluster
09:01 haomaiwang joined #gluster
09:04 zhangjn joined #gluster
09:06 kshlm joined #gluster
09:08 [diablo] joined #gluster
09:09 zhangjn joined #gluster
09:09 Dasiel joined #gluster
09:21 theron joined #gluster
09:28 EinstCrazy joined #gluster
09:30 drankis joined #gluster
09:30 arcolife joined #gluster
09:32 Dasiel joined #gluster
09:35 masterzen joined #gluster
09:40 markd_ joined #gluster
09:41 gem_ joined #gluster
09:41 mdavidson joined #gluster
09:44 Slashman joined #gluster
09:45 baojg joined #gluster
09:45 ppai joined #gluster
09:47 sakshi joined #gluster
09:47 Slashman joined #gluster
09:49 Humble joined #gluster
09:55 Dasiel joined #gluster
09:57 nangthang joined #gluster
10:02 haomaiwa_ joined #gluster
10:03 RameshN joined #gluster
10:06 aravindavk joined #gluster
10:11 zhangjn joined #gluster
10:12 rwheeler joined #gluster
10:12 Humble joined #gluster
10:29 masterzen joined #gluster
10:30 ovaistariq joined #gluster
10:31 Dasiel joined #gluster
10:35 aravindavk joined #gluster
10:35 dusmant joined #gluster
10:36 [diablo] Good morning #gluster ... guys I'm testing this how-to http://www.gluster.org/community/documentation/index.php/Getting_started_configure
10:36 [diablo] but when I create my volume I get>
10:37 [diablo] root@gfs1:~# gluster volume create gv0 replica 2 gfs1:/export/sdb1/brick gfs2:/export/sdb1/brick
10:37 [diablo] volume create: gv0: failed: /export/sdb1/brick or a prefix of it is already part of a volume
10:37 glusterbot [diablo]: To clear that error, follow the instructions at http://joejulian.name/blog/glusterfs-path-or-a-prefix-of-it-is-already-part-of-a-volume/ or see this bug https://bugzilla.redhat.com/show_bug.cgi?id=877522
10:37 post-factum :D
10:37 post-factum I like that bot
10:37 [diablo] HAHA
10:37 [diablo] sweetness
10:37 post-factum very much. +1 for that guy who invented it
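The fix glusterbot links to boils down to clearing the leftover gluster metadata on the brick directory. A minimal sketch, assuming the brick path from the command above and that the same steps are run on each server holding a brick:

    # remove the xattrs that mark the directory as belonging to a (former) volume
    setfattr -x trusted.glusterfs.volume-id /export/sdb1/brick
    setfattr -x trusted.gfid /export/sdb1/brick
    # drop the leftover gfid store, then restart glusterd and retry the create
    rm -rf /export/sdb1/brick/.glusterfs
    service glusterd restart

If a parent directory of the brick triggered the error instead, the same xattrs have to be removed from that directory as well.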
10:38 fedele Good morning #gluster.. I'm working on my 32-node gluster cluster
10:39 post-factum @glusterbot++
10:39 glusterbot post-factum: glusterbot's karma is now 8
10:39 fedele I'm able to add 19 bricks, but at the 20th I receive a Request Timeout.....
10:40 masterzen joined #gluster
10:40 post-factum fedele: present your configuration and volume info via pastebin
10:42 EinstCrazy joined #gluster
10:42 fedele post-factum, can you help me with how to use pastebin?
10:49 fedele OK, solved with pastebin
10:49 fedele This is the volume info link: http://pastebin.com/2WYBWmRk
10:49 glusterbot Please use http://fpaste.org or http://paste.ubuntu.com/ . pb has too many ads. Say @paste in channel for info about paste utils.
10:50 fedele ok I will resend the information
10:51 fedele This is the paste for volume info: http://ur1.ca/oh0kn
10:51 glusterbot Title: #317421 Fedora Project Pastebin (at ur1.ca)
10:54 fedele and this is my configuration:  http://ur1.ca/oh0l1
10:54 glusterbot Title: #317424 Fedora Project Pastebin (at ur1.ca)
10:55 fedele @paste
10:55 glusterbot fedele: For a simple way to paste output, install netcat (if it's not already) and pipe your output like: | nc termbin.com 9999
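In practice the bot's suggestion means piping the command output straight into netcat, which replies with a URL to share, e.g.:

    # prints a termbin.com URL containing the pasted output
    gluster volume info | nc termbin.com 9999
    gluster peer status | nc termbin.com 9999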
10:57 EinstCrazy joined #gluster
10:57 fedele glusterbot I don't understand
10:58 zhangjn joined #gluster
10:59 zhangjn joined #gluster
11:00 zhangjn joined #gluster
11:01 fedele http://ur1.ca/oh0kn | nc termbin.com 9999
11:01 glusterbot Title: #317421 Fedora Project Pastebin (at ur1.ca)
11:01 haomaiwa_ joined #gluster
11:02 [diablo] hmmm when connecting via fuse client http://pastie.org/pastes/10705420/text?key=bq2b5fb4g4jqqx27qipsdg
11:02 glusterbot Title: Private Paste - Pastie (at pastie.org)
11:02 * [diablo] heads for a smoke to destress
11:06 zhangjn joined #gluster
11:06 Dasiel joined #gluster
11:07 fedele glusterbot: thank you for the help with pastebin... it is the first time I've used it!!
11:09 aravindavk joined #gluster
11:10 Debloper joined #gluster
11:13 zhangjn joined #gluster
11:14 zhangjn joined #gluster
11:15 zhangjn joined #gluster
11:15 dusmant joined #gluster
11:16 zhangjn joined #gluster
11:17 dusmant joined #gluster
11:17 zhangjn joined #gluster
11:19 zhangjn joined #gluster
11:20 zhangjn joined #gluster
11:21 post-factum fedele: please show us the output of the "gluster peer status" command (via pastebin as well)
11:21 zhangjn joined #gluster
11:24 zhangjn joined #gluster
11:25 zhangjn joined #gluster
11:25 * [diablo] wonders why fuse client bugs out
11:25 [diablo] NFS works
11:26 luizcpg joined #gluster
11:28 fedele any suggestions for my problem?
11:31 kovshenin joined #gluster
11:32 [diablo] ah my desktop uses 3.7 and backend servers are on 3.4 ... guess it's not compatible
11:35 kotreshhr left #gluster
11:42 luizcpg joined #gluster
11:42 spalai joined #gluster
11:52 Bhaskarakiran joined #gluster
11:57 Manikandan REMINDER: Gluster Community Bug Triage meeting (Today) in about 3 minutes at #gluster-meeting
12:01 haomaiwa_ joined #gluster
12:03 fedele #gluster-meeting
12:04 ekuric joined #gluster
12:05 kanagaraj joined #gluster
12:08 Manikandan [CANCELED] Gluster Community Bug Triage meeting for today* because of the low number of participants
12:08 rafi fedele: glusterd logs ?
12:12 calisto joined #gluster
12:17 karnan joined #gluster
12:18 ovaistariq joined #gluster
12:23 ahino joined #gluster
12:23 kshlm joined #gluster
12:26 sakshi joined #gluster
12:26 sakshi joined #gluster
12:28 bluenemo joined #gluster
12:29 luizcpg joined #gluster
12:29 kanagaraj joined #gluster
12:30 bluenemo is there any limit to the RAM gluster will take when running three servers in replication? I can throw RAM at it with no limit - it's not that it's slow or anything, it's just that it's always swapping a bit and RAM is always full. Scaling from 22GB to 28GB per machine now to see what happens
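There is no built-in overall RAM cap; what can be bounded are the individual caches. A hedged sketch of the usual knobs (the volume name and sizes here are only examples, not from the conversation):

    # cap the client-side io-cache (per mount)
    gluster volume set myvol performance.cache-size 1GB
    # bound how many inodes the brick processes keep cached in memory
    gluster volume set myvol network.inode-lru-limit 65536

Much of a "RAM is always full" picture is typically the kernel page cache on the bricks, which is reclaimable and not a gluster setting.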
12:33 om2 joined #gluster
12:34 julim_ joined #gluster
12:34 kkeithley1 joined #gluster
12:35 martinet1 joined #gluster
12:35 aea_ joined #gluster
12:35 dusmantkp_ joined #gluster
12:37 dblack joined #gluster
12:45 ira joined #gluster
12:50 calisto joined #gluster
13:00 kshlm joined #gluster
13:01 haomaiwang joined #gluster
13:03 gowtham joined #gluster
13:04 nbalacha joined #gluster
13:07 kkeithley1 joined #gluster
13:07 kanagaraj joined #gluster
13:09 primusinterpares joined #gluster
13:11 EinstCrazy joined #gluster
13:14 arcolife joined #gluster
13:14 aravindavk joined #gluster
13:15 shubhendu joined #gluster
13:15 nishanth joined #gluster
13:16 dusmantkp_ joined #gluster
13:17 karnan joined #gluster
13:17 post-factum joined #gluster
13:19 sakshi joined #gluster
13:22 unclemarc joined #gluster
13:24 Dasiel joined #gluster
13:24 masterzen joined #gluster
13:37 Dasiel left #gluster
13:37 Dasiel joined #gluster
13:39 masterzen joined #gluster
13:49 ro joined #gluster
13:50 haomaiwa_ joined #gluster
13:53 [diablo] OK got it working... tested FUSE via a 14.04 ... worked
13:55 baoboa joined #gluster
13:57 ro Hey guys - we have a fairly large cluster that has been running without issue for about 3 months. It seems we had some kind of network hiccup; some nodes are fine, some are really slow (example: ls takes forever), and some nodes don't respond at all. On the unresponsive nodes I tried unmounting and remounting, but the remount just hangs. Any pointers on what might work/where I might try looking?
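A hedged first-aid sequence for a hung FUSE mount like that (host, volume and mount-point names below are placeholders):

    # on any server: check that all bricks and self-heal daemons are up
    gluster volume status
    gluster peer status
    # on a stuck client: look at the mount log (named after the mount point),
    # then lazy-unmount and remount
    tail -n 100 /var/log/glusterfs/mnt-gluster.log
    umount -l /mnt/gluster
    mount -t glusterfs server1:/myvol /mnt/gluster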
13:58 ahino1 joined #gluster
14:01 haomaiwa_ joined #gluster
14:06 vmallika joined #gluster
14:07 ovaistariq joined #gluster
14:13 robb_nl_ joined #gluster
14:14 drankis joined #gluster
14:26 nangthang joined #gluster
14:27 bowhunter joined #gluster
14:28 b0p joined #gluster
14:32 dusmantkp_ joined #gluster
14:32 tru_tru joined #gluster
14:33 shubhendu joined #gluster
14:33 rwheeler joined #gluster
14:36 raghu joined #gluster
14:37 dblack joined #gluster
14:38 harold joined #gluster
14:46 ahino joined #gluster
14:49 skylar joined #gluster
14:50 plarsen joined #gluster
14:51 Humble joined #gluster
14:57 robb_nl_ joined #gluster
14:57 coredump joined #gluster
15:01 haomaiwang joined #gluster
15:04 kshlm joined #gluster
15:05 Peppard joined #gluster
15:07 skylar joined #gluster
15:14 theron joined #gluster
15:29 EinstCrazy joined #gluster
15:30 farhoriz_ joined #gluster
15:31 David_Varghese joined #gluster
15:35 ctria joined #gluster
15:36 robb_nl_ joined #gluster
15:38 calisto joined #gluster
15:39 ovaistariq joined #gluster
15:45 unclemarc joined #gluster
15:46 ovaistariq joined #gluster
15:48 nbalacha joined #gluster
16:01 64MAAY42H joined #gluster
16:01 farhoriz_ joined #gluster
16:03 neofob joined #gluster
16:07 bennyturns joined #gluster
16:10 kshlm joined #gluster
16:16 B21956 joined #gluster
16:18 skoduri joined #gluster
16:22 kshlm joined #gluster
16:26 nickage__ joined #gluster
16:32 plarsen joined #gluster
16:33 wushudoin joined #gluster
16:35 arcolife joined #gluster
16:40 Wojtek joined #gluster
16:44 jiffin joined #gluster
16:45 Wojtek Hello All. We're seeing a behavior in Gluster 3.7.x that we did not see in 3.4.x and we're not sure how to fix it. When multiple processes are attempting to rename a file to the same destination at once, we're now seeing "Device or resource busy" and "Stale file handle" errors. Here's the command to replicate it: cd /mnt/glustermount; while true; do FILE=$RANDOM; touch $FILE; mv $FILE file
16:45 Wojtek -fv; done
16:49 Wojtek The above command would be run on two or three servers within the same gluster cluster. In the output, one would always be successful in the rename, while the other two would fail with the above errors
16:50 mobaer joined #gluster
16:50 Wojtek Doing the same in 3.4.x results in all renames being done correctly with no errors
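For readability, Wojtek's reproducer with the wrapped line rejoined and the mv options moved ahead of the operands:

    cd /mnt/glustermount
    # every iteration renames a fresh random file onto the same target name
    while true; do FILE=$RANDOM; touch $FILE; mv -fv "$FILE" file; done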
16:53 gem joined #gluster
16:57 inodb joined #gluster
17:00 calavera joined #gluster
17:01 haomaiwang joined #gluster
17:04 farhoriz_ joined #gluster
17:05 Slashman joined #gluster
17:06 mobaer joined #gluster
17:08 Bhaskarakiran joined #gluster
17:10 rafi joined #gluster
17:11 hagarth Wojtek: what is your use case for concurrent multiple renames?
17:14 rafi joined #gluster
17:16 drankis joined #gluster
17:19 gem joined #gluster
17:21 Rapture joined #gluster
17:23 virusuy hi guys, i have a 4-node distributed-replicated setup (2x2)
17:23 virusuy i copied 4 files, and I thought they'd be distributed (2 files per brick)
17:24 inodb joined #gluster
17:24 virusuy but instead, those 4 files are available only on 2 nodes (the ones that replicate each other), and on the other two i see only 2 files, and they're empty
17:24 virusuy is that normal?
17:31 jiffin virusuy: i didn't get ur last point, "on the other two i see only 2 files and they're empty" - so are there 6 files?
17:33 EinstCrazy joined #gluster
17:33 virusuy jiffin: sorry, no, there are 4; let's say: file 1, file 2, file 3 and file 4
17:34 virusuy node 1 (and node 2 - replica) and node3 (node 4 - replica)
17:34 virusuy on nodes 1 and 2 i see those 4 files
17:34 virusuy but on nodes 3 and 4, i only see file 1 and file 2, and both are empty
17:37 jiffin did u copy directly to the node(server) ?
17:40 virusuy jiffin: no
17:41 inodb joined #gluster
17:46 jiffin joined #gluster
17:47 jiffin virusuy: are u still there?
17:53 dthrvr joined #gluster
17:58 aea joined #gluster
18:01 nottc joined #gluster
18:01 haomaiwa_ joined #gluster
18:03 virusuy jiffin: still here
18:05 virusuy jiffin: i tried the same thing again, and now worked OK
18:05 virusuy this is weird
18:06 jiffin virusuy: did u copy directly to node(server/bricks) instead of using glusterfs-client??
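One way to check where a file actually landed without logging into the bricks is the pathinfo virtual xattr, read through a FUSE mount (the mount point and file name below are placeholders):

    # lists the brick(s), i.e. host:/path pairs, that hold this file
    getfattr -n trusted.glusterfs.pathinfo -e text /mnt/gv0/file1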
18:09 Wojtek hagarth: we generate files and push them to the gluster cluster. Some are generated multiple times and end up being pushed to the cluster at the same time by different data generators, resulting in the 'rename collision'. We also use cluster.extra-hash-regex to make sure the data is written in place. And this does the rename.
18:15 plarsen joined #gluster
18:18 wushudoin joined #gluster
18:20 ovaistariq joined #gluster
18:24 hagarth Wojtek: will let some of our dht developers know about this. In 3.6, a change was introduced whereby, if a lock fails to be acquired on the source inode, EBUSY is returned to the application.
18:25 JoeJulian @split-brain
18:25 glusterbot JoeJulian: To heal split-brains, see https://gluster.readthedocs.org/en/release-3.7.0/Features/heal-info-and-split-brain-resolution/ For additional information, see this older article https://joejulian.name/blog/fixing-split-brain-with-glusterfs-33/ Also see splitmount https://joejulian.name/blog/glusterfs-split-brain-recovery-made-easy/
18:26 hagarth Wojtek: the locking is attempted as part of a rename transaction
18:29 nickage__ joined #gluster
18:38 theron joined #gluster
18:40 MACscr|lappy joined #gluster
18:42 ovaistariq joined #gluster
18:44 kovshenin joined #gluster
18:51 ira joined #gluster
18:53 JoeJulian <grumble>Had a brick stop responding. Nothing in the logs. Killed it and force started the volume and now my heals have started over at the beginning of my stupid 20TB images. These damned things are never going to finish getting healed.
18:54 MACscr|lappy joined #gluster
18:56 samppah JoeJulian: *ouch* pretty huge images
18:57 inodb joined #gluster
19:03 Wojtek hagarth: Thanks. Is the EBUSY the new intended behavior, or is it a bug?
19:03 coredump|br joined #gluster
19:04 Akee1 joined #gluster
19:06 malevolent_ joined #gluster
19:07 yalu_ joined #gluster
19:07 lanning_ joined #gluster
19:07 lalatend1M joined #gluster
19:08 obnox_ joined #gluster
19:08 msvbhat_ joined #gluster
19:10 devilspgd_ joined #gluster
19:10 rossdm_ joined #gluster
19:11 xMopxShe- joined #gluster
19:11 nhayashi_ joined #gluster
19:11 CyrilPepL joined #gluster
19:11 p8952_ joined #gluster
19:12 klaxa joined #gluster
19:12 armyriad joined #gluster
19:12 cliluw joined #gluster
19:12 atrius` joined #gluster
19:12 fsimonce joined #gluster
19:13 bluenemo joined #gluster
19:13 Akee joined #gluster
19:15 Slashman joined #gluster
19:15 kbyrne joined #gluster
19:16 csaba joined #gluster
19:16 coreping_ joined #gluster
19:19 plarsen joined #gluster
19:29 skylar joined #gluster
19:30 bennyturns joined #gluster
19:31 ovaistariq joined #gluster
19:32 ovaistariq joined #gluster
19:41 haomaiwang joined #gluster
19:43 EinstCrazy joined #gluster
19:45 nottc joined #gluster
19:46 raghu joined #gluster
19:48 yalu joined #gluster
19:56 ahino joined #gluster
19:56 ovaistariq joined #gluster
19:57 kshlm joined #gluster
20:01 ovaistar_ joined #gluster
20:03 inodb joined #gluster
20:28 mhulsman joined #gluster
20:34 cyberbootje joined #gluster
20:35 robb_nl_ joined #gluster
20:39 calavera joined #gluster
20:39 hagarth Wojtek: the behavior is debatable. I would prefer it to behave like before. Cannot think of a valid technical reason for not doing that. Will need to check with the dht developers though.
20:41 bmikhael joined #gluster
21:12 mobaer joined #gluster
21:17 ctria joined #gluster
21:18 theron joined #gluster
21:31 nottc joined #gluster
21:32 MACscr ok, my gluster is screwed up. i have a two-node cluster and im getting different file sizes on each mount. i dont get it, as nothing is in heal mode
21:35 raghu MACscr: can you please tell me what type of volume u are using?
21:35 MACscr raghu: type?
21:35 raghu MACscr: replicate? or pure distribute? or distributed replicate?
21:36 MACscr hmm, i honestly have no idea. guess i need to check the configs. but its a pretty stock setup
21:36 raghu can you please give the o/p of "gluster volume info <volume name>"?
21:37 MACscr http://paste.debian.net/plain/378224
21:37 MACscr a little extra info too
21:38 raghu MACscr: OK, so you are using a replicate volume with replica count 2. You said you are getting different file sizes on each mount. How many mounts do you have?
21:39 raghu And are they fuse or NFS?
21:39 raghu please give me the o/p of the mount command. That answers most of my previous question
21:40 Larsen joined #gluster
21:41 Larsen_ joined #gluster
21:41 MACscr i combined everything for future reference http://paste.debian.net/plain/378225
21:42 bennyturns joined #gluster
21:44 haomaiwa_ joined #gluster
21:44 JoeJulian Look in /var/log/glusterfs/mnt-glusterfs.log for clues
21:45 JoeJulian on gluster2
21:45 MACscr the raw image files are being served by iscsi and i accidentally renamed the wrong file while it was still being served through iscsi, so thats why you see the borked file. lol
21:45 MACscr its empty
21:45 MACscr on both systems
21:46 MACscr lol, thats weird
21:46 MACscr mnt-glusterfs.log.1 is the actual current file, but the regular one has been empty since jan 19th
21:46 dblack joined #gluster
21:46 JoeJulian Ah, you rotated logs without a hup.
21:47 JoeJulian One more reason I always use copytruncate.
21:47 MACscr i didnt do anything special. so it only means the package sets them up wrong
21:47 MACscr gluster in general is horribly broken on ubuntu 14.04 lts
21:48 JoeJulian Anyway, I see another possible issue. Have you copied files from one brick to another behind gluster's back?
21:48 MACscr ive rsynced files
21:48 JoeJulian Bingo
21:48 MACscr not from one to the other though
21:49 MACscr only moved one of the systems to new storage
21:49 MACscr they both are simply linux containers
21:49 JoeJulian You missed the hardlinks and possibly the extended attributes.
21:49 JoeJulian Definitely the hardlinks though.
21:49 MACscr so how should i resolve it?
21:50 MACscr i have all the originals
21:50 MACscr the problem is that most are in use right now and i cant touch them on gluster1
21:50 MACscr gluster2 is just hot standby for iscsi
21:51 MACscr i have two systems that cant boot though, so i could mess with them
21:53 JoeJulian I'm not sure which brick is broken. On each server try "find /var/gluster-storage/.glusterfs/?? -type f -links 1". If one has a bunch and the other doesn't, I would probably "rm -rf /var/gluster-storage/.glusterfs/??" and do a full heal.
21:54 JoeJulian s/and do/on the one with a bunch of singly linked files and do/
21:54 glusterbot What JoeJulian meant to say was: I'm not sure which brick is broken. On each server try "find /var/gluster-storage/.glusterfs/?? -type f -links 1". If one has a bunch and the other doesn't, I would probably "rm -rf /var/gluster-storage/.glusterfs/??" on the one with a bunch of singly linked files and do a full heal.
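Pulling that together into one hedged sequence (the brick path and volume name are the ones that appear later in this conversation; run the find on both servers first and clean only the side that shows a pile of singly linked files):

    # count gfid entries that have lost their hardlink to a real file
    find /var/gluster-storage/.glusterfs/?? -type f -links 1 | wc -l
    # on the broken brick only: drop the stale gfid store, then trigger a full heal
    rm -rf /var/gluster-storage/.glusterfs/??
    gluster volume heal volume1 full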
22:05 MACscr http://paste.debian.net/plain/378228
22:05 MACscr though gluster1 has the files that are in use
22:06 MACscr so im not sure how to handle that when they are both screwed up
22:18 ctria joined #gluster
22:26 theron joined #gluster
22:30 JoeJulian MACscr: gluster2 is the culprit.
22:31 JoeJulian I'm not sure what the dot files are for. They're relatively new.
22:34 MACscr so run 'rm -rf /var/gluster-storage/.glusterfs/??' on gluster2 and then run what?
22:35 JoeJulian heal full
22:36 farhoriz_ joined #gluster
22:37 MACscr root@gluster2:/var/log/glusterfs# gluster volume heal volume1 full
22:37 MACscr Launching heal operation to perform full self heal on volume volume1 has been unsuccessful
22:37 MACscr is that the right command?
22:37 JoeJulian yes
22:38 MACscr JoeJulian: but it wasnt successful. Is that because i have self heal on?
22:39 JoeJulian No
22:39 JoeJulian Not sure why it failed. You'll have to check your glusterd logs.
22:40 MACscr btw, way too many logs. pretty frustrating
22:40 MACscr different logs that is
22:40 JoeJulian Then use a log aggregator like I do. logstash is nice.
22:41 MACscr or just do it like every other project and dont go log crazy?
22:42 MACscr i dont know of any project that has so many different logs for a base install
22:42 JoeJulian I used to agree with you, but it would make it much harder to diagnose things if everything was all lumped together.
22:42 JoeJulian But hey, feel free to file a bug report.
22:42 glusterbot https://bugzilla.redhat.com/enter_bug.cgi?product=GlusterFS
22:42 JoeJulian I'm just a user like you.
22:43 MACscr i also find it a bit annoying that it doesnt use the servers date timezone
22:43 MACscr again, like every other log
22:44 JoeJulian Yeah, that would be fucking stupid.
22:44 JoeJulian In fact, all logs should be in UTC.
22:44 MACscr why?
22:44 JoeJulian Local timezones are just dumb.
22:44 MACscr lol, why?
22:44 ctria joined #gluster
22:45 JoeJulian Because when you're running a global network, you don't want to have to translate times all day long to find out which things happened at the same time.
22:45 JoeJulian tbh, any server I run operates in utc.
22:46 MACscr running UTC is the same as someone wanting to run them all at the same timezone as an HQ or whatever. no diff
22:46 MACscr with that said, all logs should use the same timezone
22:46 JoeJulian The entire world should run in UTC and get rid of timezones.
22:46 MACscr that would make zero sense
22:46 JoeJulian Why?
22:46 MACscr i dont know, maybe the sun?
22:47 JoeJulian Does the sun follow a set schedule?
22:47 JoeJulian Man... I seem to be hostile today. Sorry.
22:48 MACscr np
22:48 EinstCrazy joined #gluster
22:48 MACscr i should be more angry with the state of my gluster, but im more defeated than anything right now
22:48 JoeJulian It's these stupid 20TB customer images. I told administration we shouldn't be allowing them and now they're pissing me off.
22:48 MACscr wish i would have nfsrooted my systems instead of iscsi on top of gluster as well. just another thing that can break
22:49 JoeJulian I've been trying to convince people we don't want to offer iscsi.
22:50 JoeJulian They think we need to offer everything a nutanix or emc offers. I disagree.
22:50 MACscr should i be looking at /var/log/glusterfs/glfsheal-volume1.log?
22:53 JoeJulian The glusterd log is /var/log/glusterfs/etc-glusterfs-glusterd.vol.log
22:53 JoeJulian So tell me, what problems has iscsi been causing you?
22:56 MACscr i dont even know enough to really say to be honest. I think its just more the maintenance of things. Have to remember to take the iscsi target offline before i maintain the raw image through nfs even if nothing is connected to the target
22:56 MACscr at least thats my theory
22:57 MACscr raw image is fine locally when i do a fsck on it, but then iscsi boot says there is a bad superblock
22:58 JoeJulian Hrm, interesting. I wonder why.
22:58 MACscr [glusterd-volume-ops.c:861:__glusterd_handle_cli_heal_volume] 0-management: Received heal vol req for volume volume1
22:58 JoeJulian I wonder if it does some read pre-caching or something.
22:58 MACscr doesnt really say there was a problem
22:58 JoeJulian Nope, what about the other one?
22:59 neofob left #gluster
22:59 MACscr no errors there either
23:00 MACscr 'gluster volume heal volume1 info' does show some things healing
23:00 MACscr but only 1
23:01 MACscr on both. same one
23:01 MACscr nvm, a few of them when run from gluster1
23:01 MACscr so maybe i should just leave it?
23:02 JoeJulian Perhaps
23:02 JoeJulian Another possibility is to just run a "find" on one of the client mounts.
23:02 MACscr for?
23:02 JoeJulian Well, "find -exec stat {} \; >/dev/null"
23:03 JoeJulian Reads metadata output from every file, which does a self-heal check.
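Spelled out against a mount point (the path is a placeholder), that one-liner would look like:

    # stat()ing every file through the client mount triggers a self-heal check per file
    find /mnt/glusterfs -exec stat {} \; > /dev/null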
23:04 MACscr well gluster2 has nothing in its gluster fuse mount dir
23:04 MACscr which i find a bit odd
23:06 JoeJulian <sigh>
23:06 JoeJulian What if you stat a file that you know exists?
23:06 JoeJulian by name
23:09 post-fac1um joined #gluster
23:11 MACscr JoeJulian: sorry for being such a novice, but what should i look for then? the heal status seems to be the same on both now, but just that one file
23:12 MACscr also, why is my gluster mount on gluster2 empty? if i create a new file on it, it shows up on gluster1, but gluster2 still appears empty
23:12 cyberbootje joined #gluster
23:12 mobaer joined #gluster
23:12 ahino joined #gluster
23:29 rcampbel3 joined #gluster
23:30 MACscr JoeJulian: not sure if these mean anything either http://paste.debian.net/plain/378237
23:32 theron_ joined #gluster
23:32 nage joined #gluster
23:32 JoeJulian MACscr: kill the glusterfsd process on gluster2 and then "gluster volume start volume1 force"
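Concretely, something like the following on gluster2 (the PID placeholder comes from the volume status output; exact formatting varies by version):

    # find the PID of the glusterfsd brick process on this host
    gluster volume status volume1
    kill <brick-pid-on-gluster2>
    # restart any bricks that are not running
    gluster volume start volume1 force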
23:33 haomaiwa_ joined #gluster
23:41 mobaer joined #gluster
23:42 plarsen joined #gluster
23:45 caitnop joined #gluster
23:49 theron joined #gluster
23:49 rcampbel3 sar shows about 70-90% of RAM used on my 2 glusterfs nodes with 3.5GB of RAM, but CPU is low -- should I bump up instance sizes to add more RAM?
23:50 B21956 joined #gluster
23:51 farhoriz_ joined #gluster
23:51 aea joined #gluster
23:52 MACscr ive found gluster to be pretty resource-intensive for a group of files with very little io
23:52 MACscr JoeJulian: thanks, that worked
23:58 gildub joined #gluster
23:58 calisto joined #gluster
