Accessing Nextcloud via SMB: new files not updating

Hello!

New to the forums and a rookie at setting up my own Nextcloud. I've recently managed to set up my own Nextcloud Docker container in TrueNAS SCALE.

When I try to access my Nextcloud "files" folders through an SMB share, I can see and edit the files, but they don't appear to update on the web portal.

For example, I created a folder titled "Test_02_only-visible-on-SMB" but it doesn't appear in the web GUI.

Any advice? Probably something in my setup is wrong, so any pointers are much appreciated! :slight_smile:

Thank you!

I have mine set up in TrueNAS too, but not using the built-in container orchestration.
I'm guessing you're copying files directly to the SMB share and expecting them to show up in Nextcloud. That won't work: Nextcloud only knows about files that went in through its own interfaces, because it tracks everything in its database. If you're dead set on that approach, you'd need something triggering a file scan in Nextcloud after every change, and that's not going to be easy to work with.

I'll explain how I have mine set up.
I opted to set up a VM in the TrueNAS virtualization layer to run things in Docker. At first this was because I didn't like the GUI layer for managing containers; I'm more used to manually configuring the settings for each container. I didn't want to run apps in the TrueNAS containerization UI, and I wanted to limit the overhead of apps on the NAS by controlling the resources available to the VM. I also already had another server handling Docker the same way, and I wanted to retire that bare-metal server and move all of my containers to this VM. I just prefer managing things at a lower level: running things from a compose file, and handling updates and configuration myself through the command line. I've not done performance monitoring to have facts on this point, but response times feel much faster with it set up this way. I'm currently disappointed that TrueNAS SCALE is moving some of its own services into those containers as it gets updates, but that's another topic…

I have an NFS share pointing at a dataset in TrueNAS that is specific to Nextcloud and serves as my Nextcloud data home. This lets me create full snapshots of just the data and use TrueNAS for what it's made for. (I also have a separate NFS share for my own user in Nextcloud that's mounted after the main one, but that's a complicated story and I'll keep this simple.) In the VM, the NFS share is set up in /etc/fstab to mount on startup, and in the Docker compose volume mounts for the Nextcloud container it's mapped to the location of the NFS mount in the VM. From there I use the Nextcloud desktop sync app on my remote computers and the Nextcloud app on my phone. I also have an SMB share pointed directly at that dataset in TrueNAS in case I need to get at the files and can't access Nextcloud for some reason, but that's only for verifying something or pulling a copy of a file down, because I know that if I modified those files it would make Nextcloud inconsistent.

Basically, I feel you're where I was a while ago with my setup: you're trying to have two cooks in the kitchen. You need one source of truth for the data, either Nextcloud or the SMB shares. If you do need to use the SMB share to add a file, you need to keep that in mind and trigger a file scan. IIRC they have an automatic filesystem scan setting now, but it's never been quick enough for moving a file onto the SMB share and then immediately trying to access it in Nextcloud to share it out.

One other reason I'd suggest setting up a VM on TrueNAS for containers is to keep them all self-contained, separate from TrueNAS, and easier to manage and move if needed. In addition to Nextcloud I have many other containers in use: MySQL, Redis, and Elasticsearch all run in separate containers in the VM, all used by Nextcloud and by some of the other containers, with their own namespaces configured in Redis and Elasticsearch. I found that running MySQL and Redis in separate containers significantly improved the overall response of the Nextcloud server (again, I have no performance metrics to prove this, just my own experience and the couple of other users I have on Nextcloud saying it's much faster).
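For Nextcloud to actually use Redis for caching and file locking, the memcache settings in config.php have to point at the Redis container. A minimal sketch via occ, assuming the containers are named nextcloud and redis (adjust to your own names):

    docker exec -u www-data nextcloud php occ config:system:set memcache.locking --value='\OC\Memcache\Redis'
    docker exec -u www-data nextcloud php occ config:system:set memcache.distributed --value='\OC\Memcache\Redis'
    docker exec -u www-data nextcloud php occ config:system:set redis host --value=redis
    docker exec -u www-data nextcloud php occ config:system:set redis port --value=6379 --type=integer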

If you want to understand more of what I'm talking about with the file scan, research these topics:

the file scan command:
occ files:scan --all

the config.php setting:
'filesystem_check_changes' => 1,
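Since Nextcloud here runs in a container, the scan has to be run as the web server user inside it. A quick sketch, assuming the container is named nextcloud like in the compose file further down:

    # full rescan of every user's files (can be slow on big installs)
    docker exec -u www-data nextcloud php occ files:scan --all

    # the config flag can also be set via occ instead of editing config.php by hand
    docker exec -u www-data nextcloud php occ config:system:set filesystem_check_changes --value=1 --type=integer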

(Note: my setup is probably overly complicated for some, and not all of this is necessary to run Nextcloud. It's simply what I prefer, and it gives me data consistency and backup images that suit my own needs. The basic answer to your question is: don't use SMB to move files into Nextcloud. Set up a folder on your client PCs that you want to sync with Nextcloud and use the Nextcloud desktop sync app. And if you do use SMB, don't expect files to show up quickly unless you trigger a file scan.)

Hope all that helps!


Thank you for the amazing and detailed reply! :pray:

This helps a ton. It also made me realize that what I basically want is something much simpler: something my clients can access to download the entire project at the end of a job.

What I was trying to avoid is having to duplicate my data on my NAS when sharing my projects through Nextcloud (which is how I used to use Dropbox, before they locked everything down on Macs and dropped support for ZFS too…).

I edit / color grade directly off my NAS, and projects are usually 500-700 GB. Occasionally I'd share the entire project with the client via Dropbox to download: I'd drag the whole project folder into Dropbox, let it sync/upload, and then send a link.

So ideally I was trying to achieve the same with Nextcloud, or something similar, without having to duplicate the data. But it looks like this might not be an option? In which case, the next best thing I guess is to duplicate the data from my SMB share into Nextcloud using the desktop app, and wait for that to upload locally from my SMB share to my Nextcloud dataset.

The advantage of doing it this way "seems" to be that it works easily; the disadvantage is that I'm essentially doubling the size of the project whenever I share it with a client, and uploading from my SMB share to my Nextcloud dataset takes time.

Screenshot of my dataset for reference.

Again, thank you for taking the time to write all this out; I'm definitely going to keep studying your detailed post!

EDIT: I also thought about setting up Nextcloud on AWS so that's where my data would be duplicated to, but I'm not sure if that's a good idea or if it just makes the setup more complicated.

Thanks again!

Glad that helped. I have some other thoughts about your workflow that I'll have to type up later when I get time. In short, though: yes, duplication will likely be needed, for a couple of reasons I'll get into when I have time.
Also, before you set up the desktop sync app: MAKE SURE YOU HAVE A BACKUP or snapshot of your data if that's your only copy. I've had occasions where Nextcloud and the desktop app couldn't reach the data storage location and then completely deleted my local copy, which in your case would be your SMB share.
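On TrueNAS that's quick to do from the Snapshots UI or the shell. A sketch; the pool/dataset name here is made up, so use your own:

    # recursive snapshot of the share's dataset before the sync app ever touches it
    zfs snapshot -r tank/projects@before-nextcloud-sync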


So, regarding your workflow:
I do photography for fun, so my workflow was: open Lightroom, plug in the camera, offload in Lightroom with the main destination being an iSCSI drive on the NAS mounted on the PC, and then edit the photos from there.

I initially started with an iSCSI mount on my PC and tried to work from that, doing my edits directly on the photos on the iSCSI drive. It seemed cool, and smart, since I was putting the files right where the data redundancy was, and it worked OK. One drawback was that response in the photo apps was a bit limited because the iSCSI drive was over the network. Another was that my storage drives are spinning rust. Those aren't the fastest, but I feel more comfortable with them for long-term storage, and they're MUCH cheaper.

I finally got tired of the slower response and ended up expanding the storage on my PC until I could work with my entire photo collection locally and use the NAS as a backup destination. As I became more familiar with the NAS and its functions, I realized this is a much safer and better setup, because previously I was relying on the NAS as both my working directory and my backup. That meant my only real fallback was the snapshots, and what if there was some issue with saving or interacting with files over my local network? Too many unexpected variables. I've done a couple of weddings, and luckily have never lost anything, but I wanted to protect myself from that ever happening.
My current result is working directly on my PC and having an rsync job on the PC that pushes changes to an rsync endpoint on the NAS, a dataset holding all my photos, both manually and on a schedule. It's also MUCH faster working locally; yes, the rust drives are slower, but the network latency, even over a very fast connection, was too much for me.
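A sketch of what that push looks like; the paths and hostname here are placeholders, not my real layout:

    # mirror the local photo library to the NAS dataset; -a preserves permissions
    # and times, --delete makes the NAS side match the local side exactly
    rsync -avh --delete /home/me/Pictures/ nas.local:/mnt/tank/photos/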

I'm not sure what application you're working with, but Lightroom allows you to have a "published" folder that the finished work you select gets pushed to. When I offload photos, I offload to a main directory locally on the PC with a folder structure of "general pictures > year > month". From there I create a publish set in Lightroom, in a separate location from the general year > month offload, do my work on the photos, and those are the ones I provide to the client or person or online location.

Where I'm going with this is: if the application you work in has a similar sort of publish folder, you could make that published location your end-result destination, and then point the Nextcloud desktop app at the set of folders where you publish your work, say, a different folder for each job. The Nextcloud sync app will push these up to Nextcloud and you can then share them out to your client from there. At the same time you'll have a safe working directory locally on your PC, and an "off PC" backup being pushed to the NAS with an rsync script into a separate dataset. Yes, it duplicates things, but I feel much more comfortable having this redundancy of local PC storage, my backup, and the client share location. Yes, I'm a bit of a data hoarder, but I like my photos :smiley:
Good luck and have fun!

One other thought regarding your mention of AWS… that cloud stuff is so expensive it doesn't make sense to me. I like doing things myself, I have the infrastructure to host things myself, and I'd rather put the cash into expanding storage and server resources and own my infrastructure than pay the cloud providers.


I know you're probably tired of hearing from me lol. Just today I started exploring another feature I hadn't used before, and I feel it might fit your needs: virtual file support. Enabling it has helped me clear up a boatload of space for drone videos I didn't need on my local PC but still wanted to keep. You enable it in the desktop sync app from the three-dot menu.

Then you can go to a specific folder and, in the right-click menu under Nextcloud, you'll have the option to "Free up local space". Mine shows up grayed out because I've already freed this folder up; the copy remains on Nextcloud and only a placeholder remains locally.

Then, if you need to access a file again, the Nextcloud sync app pulls it back down.
I of course took a snapshot before I did all this, because why not be safe… but it all enabled perfectly fine.

Also notice the share options in the same right-click menu; using those might make your workflow even faster, since you can control sharing your work with your clients from there.


Oh, this is cool! And no, not tired at all haha. Still learning a lot, so I very much appreciate you putting your time into this.

Right now I've given up on trying to install Nextcloud using the TrueNAS app, and I'm going to try running it in a virtual machine within TrueNAS.

If that still doesn't work, I might, when budget allows, purchase a small Intel NUC and use that for Nextcloud; that should make port forwarding and communicating with TrueNAS via an NFS share much easier and more stable.

But still early days, learning a lot.

A couple more in-depth things to note if you go the VM route. I've been meaning to post this adventure on my blog but haven't yet lol

  • I have my VM on SSDs and the data drive mapped to a dataset on rust drives; this seems to make things noticeably more responsive.
  • If you go the route of mounting a dataset over NFS in the VM: on the NFS share in TrueNAS I have maproot and mapgroup set to root. That probably isn't best practice, but it's what I did to make mine work. I also have an authorized host listed on the NFS share, which is the local-network IP of the VM. (I can't recall if I had to set up a bridge network in TrueNAS for the VM to get a local IP on my network or if that was for the VLANs; either way, it's best if your VM can get an IP from your router and appear as a device on your local network, same as the NAS but with a different IP than the NAS, hope that makes sense.)
  • The files in the dataset need an owner:group of www-data. I'm not completely sure this is required, but I seemed to have odd things happen when the permissions weren't set that way. I can still interact with the files over SMB; I believe I have that mounted in Windows through the ACL in TrueNAS, and it seems to work OK for alternative access in rare cases, but I still let the desktop sync app handle things.
  • My fstab entry on the VM to mount the dataset looks like the below, where x.x.x.x is the IP of TrueNAS. This has worked well except after a reboot: I've been having trouble with the mount not coming up before Docker wants to get going, so Nextcloud starts, can't see the files on the not-yet-mounted NFS share, and gets all confused. I've tried adding delays in Docker, in systemd, and a number of other things, but can't seem to get the timing right. To recover I have to stop the Nextcloud container, make sure the NFS share is mounted, then start Nextcloud back up; and if it did "start up" in the broken state, trigger a full file scan again, which loses my favorites, history, and recent-files listings, though everything else is OK. I currently have it set somehow to fail Docker startup if it can't reach the mounts, which is fine; I rarely reboot, and I can start the Docker services and containers manually if it's down. I can't find in my notes how I did that at the moment; I think it was in systemd (see the sketch after this list). If I find it I'll reply back later.
    The fstab entry:
    x.x.x.x:/mnt/tank/servers/nextcloud/data /mnt/containers/nextcloud/data nfs defaults,_netdev 0 0
    If you have all SSDs and aren't going to do this silly stuff I'm doing with NFS shares, you likely don't even need to worry about any of this lol
  • This is the part of my Docker compose file with the relevant bits for Nextcloud in the VM. Note I'm using a mix of a Docker .env file and Docker secrets for a couple of things in here. I also have OnlyOffice in here; that's not needed, but you might want to look into it down the road, I love it. Your ports and volume mounts will likely differ.
  nextcloud:
    container_name: nextcloud
    image: nextcloud:27.0.2   # pinned to a specific release so upgrades are deliberate
    privileged: true          # gives the container broad host privileges
    restart: unless-stopped
    ports:
      - 8080:80               # the reverse proxy on the other server forwards to this port
    volumes:
      # html/apps/config live on the VM's local disk; data sits on the NFS mount from TrueNAS
      - /mnt/containers/nextcloud/html:/var/www/html
      - /mnt/containers/nextcloud/apps:/var/www/html/custom_apps
      - /mnt/containers/nextcloud/config:/var/www/html/config
      - /mnt/containers/nextcloud/data:/var/www/html/data
    environment:
      # the *_FILE variables point at docker secrets; the paths come from the .env file
      - POSTGRES_HOST=postgres
      - POSTGRES_DB_FILE=${nextcloud_postgres_db}
      - POSTGRES_USER_FILE=${nextcloud_postgres_user}
      - POSTGRES_PASSWORD_FILE=${nextcloud_postgres_password}
      - NEXTCLOUD_ADMIN_PASSWORD_FILE=${nextcloud_admin_password}
      - NEXTCLOUD_ADMIN_USER_FILE=${nextcloud_admin_user}
      - PHP_MEMORY_LIMIT=6G
    depends_on:
      - postgres
      - redis
      - onlyoffice
  postgres:
    container_name: postgres
    image: postgres           # untagged = latest; pinning a major version is safer
    restart: always
    ports:
      - 5432:5432
    volumes:
      - /mnt/containers/postgres/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB_FILE=/run/secrets/postgres_db
      - POSTGRES_USER_FILE=/run/secrets/postgres_user
      - POSTGRES_PASSWORD_FILE=/run/secrets/postgres_password
    secrets:
      - postgres_db
      - postgres_password
      - postgres_user
  redis:
    container_name: redis
    image: redis:latest
    restart: always
    ports:
      - 6379:6379
    volumes:
      - /mnt/containers/redis:/data
  onlyoffice:
    container_name: onlyoffice
    image: onlyoffice/documentserver:7.4
    restart: always
    ports:
      - 7021:80
    volumes:
      - /mnt/containers/onlyoffice/DocServer/logs:/var/log/onlyoffice
      - /mnt/containers/onlyoffice/DocServer/data:/var/www/onlyoffice/Data
      - /mnt/containers/onlyoffice/DocServer/lib:/var/lib/onlyoffice
      - /mnt/containers/onlyoffice/DocServer/db:/var/lib/postgresql
    environment:
      - DB_TYPE=postgres
      - DB_HOST=postgres
      - DB_NAME=onlyoffice_db
      - DB_USER=${oo_postgres_user}
      - DB_PWD=${oo_postgres_password}
      - USE_UNAUTHORISED_STORAGE=true   # lets Document Server talk to storage with untrusted/self-signed certs
      - JWT_ENABLED=true
      - JWT_SECRET=${oo_jwt_secret}
      - JWT_HEADER=AuthorizationJwt
      - JWT_IN_BODY=true
    depends_on:
      - postgres
  • I have nginx running on another server acting as the reverse proxy, which is the public-facing part of all of this (it also has cloudflared on it, limited to only the nginx container). If you set that up, it will need to proxy the connection from outside over to your VM, and you'll also have to configure trusted_proxies and trusted_domains in the Nextcloud config.php.
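Two sketches related to the bullets above; the container names, IPs, and domains here are placeholders, not the exact values from my setup. First, the systemd piece I mentioned for refusing to start Docker until the NFS mount is up; a drop-in for docker.service along these lines should do it (run systemctl daemon-reload afterwards):

    # /etc/systemd/system/docker.service.d/nfs-mounts.conf
    # refuse to start docker until the Nextcloud data mount is present
    [Unit]
    RequiresMountsFor=/mnt/containers/nextcloud/data

And the trusted_proxies / trusted_domains settings can be applied via occ instead of editing config.php directly:

    docker exec -u www-data nextcloud php occ config:system:set trusted_domains 1 --value=cloud.example.com
    docker exec -u www-data nextcloud php occ config:system:set trusted_proxies 0 --value=192.168.1.10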

Have fun!


Hey! Just wanted to give you an update. After watching and following a lot of tutorials, I came to the conclusion that TrueNAS SCALE should just stay as storage, and that's it. I am now using a secondary machine for Nextcloud, which I set up successfully using Docker and Portainer. Currently it's accessible at cloud.denisediting.com and seems to work OK. The only issue I have is that I'm unable to upload anything bigger than a couple of MB. If you have any pointers on where I should be looking in my Nextcloud config, that would be a huge help. Again, thank you for all your support so far :slight_smile:

Just to update on this: I saw a thread about setting maxChunkSize=50000000, which I did in the client config files on both my server (running Pop!_OS) and my Mac, and files now seem to upload even when they're a couple of gigabytes.
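For anyone who finds this later: that setting goes in the desktop client's nextcloud.cfg (on Linux it's at ~/.config/Nextcloud/nextcloud.cfg), in the [General] section; the value is the 50 MB chunk size from the thread I followed:

    [General]
    maxChunkSize=50000000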

It's still slower than I'd like, so I still need to keep searching for ways to optimize it.
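Next on my list to check, in case it helps anyone else: the server side has its own upload limits too. From what I've read, the official Nextcloud image lets you raise the PHP limits with an environment variable, and if nginx sits in front it caps request body sizes as well. A sketch, assuming the official image and an nginx reverse proxy:

    # compose environment for the nextcloud service
    - PHP_UPLOAD_LIMIT=16G

    # nginx server block on the proxy
    client_max_body_size 16G;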