How much bandwidth will a database connection eat over 512k broadband?
#1
We presently run a 512k broadband connection at work for email and web access. It's more than enough bandwidth for what we use it for.
We're considering linking a remote office to our server via VPN but I'm a little worried as to how much bandwidth this will take out of our 512k line.
Does anyone have any experience of this ?
Is it possible to limit the bandwidth of a specific application so it doesn't use the whole lot ?
-DV
#2
...perhaps I should add - the remote office consists of only a couple of PCs, one of which would connect to the main office over a WAN (512k broadband).
The machine runs an MS Access app which would talk to our server over the WAN.
Before you say anything - yes, I bloody hate MS Access too, and it's nothing to do with me!!!
#3
Scooby Regular
Join Date: Sep 2001
Location: Bangor, Northern Ireland
Posts: 2,033
I can't answer your question specifically, but obviously we'd need to know the volume of hits and the amount of data being queried (and over what time frame) to give an idea of the drain on the connection. It would be pretty tricky to work it all out... try it and see, I guess.
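To put some rough numbers on that, here's a back-of-envelope sketch of how much of a 512k line a remote database app would eat. All the workload figures (queries per minute, bytes per query) are hypothetical placeholders - substitute your own measurements.

```python
# Back-of-envelope estimate of line utilisation for a remote DB app.
# Workload numbers below are made-up examples, not measurements.

LINE_KBPS = 512                               # ADSL line rate, kbit/s
LINE_BYTES_PER_SEC = LINE_KBPS * 1000 / 8     # ~64,000 bytes/s ceiling

def utilisation(queries_per_minute, avg_query_bytes):
    """Fraction of the line consumed, averaged over a minute."""
    bytes_per_sec = queries_per_minute * avg_query_bytes / 60.0
    return bytes_per_sec / LINE_BYTES_PER_SEC

# Hypothetical example: 30 queries/minute, each pulling a 50 KB recordset.
print(f"{utilisation(30, 50_000):.0%} of the line")   # prints "39% of the line"
```

Even a modest query rate with fat recordsets eats a big chunk of a 512k pipe, which is why the recordset size matters more than the query count.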
#4
Moderator
We run Exchange & an SQL-based prog over VPN to 2 offices via a 512K ADSL line.
That works fine with little or no problems, though it could maybe be faster.
We also run a webserver for e-booking on the same line, but the traffic's probably not massive, tbh.
There's an amount of file sharing, plus remote workers using Sage Line 50 and VNC, and it all seems to be OK, if a tad (not much) sluggish at times.
We also sometimes access an Access db with another prog without much trouble, but I can't say how yours would compare - if it's relational, the potential traffic generated could differ.
Suck it and see - worst case, you get the budget for a bigger pipe.
#5
Scooby Regular
Join Date: Sep 1999
Location: Bedfordshire
Posts: 4,037
Daz,
Yep, it all depends on how large the recordset you're returning is - it's all about the initial design of the app, really. A good client-server app should return just the amount of data needed, only when it's needed. If users can build their own searches you should always try to limit the number of rows returned, but I guess if you know you're not going to return thousands it won't be a problem!
Gary
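Gary's "return just what's needed" point can be sketched in a few lines. This uses Python's sqlite3 as a stand-in for any client-server database; the table, column names and row counts are invented for illustration.

```python
# Sketch of server-side filtering vs. dragging everything client-side.
# Table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, f"cust{i}") for i in range(10_000)])

# Bad: pulls every row across the wire, then filters on the client.
all_rows = conn.execute("SELECT * FROM orders").fetchall()

# Better: let the server filter, and cap the result set size.
MAX_ROWS = 100
few_rows = conn.execute(
    "SELECT * FROM orders WHERE customer = ? LIMIT ?",
    ("cust42", MAX_ROWS)).fetchall()

print(len(all_rows), len(few_rows))   # prints "10000 1"
```

On a LAN the first query merely feels slow; over a 512k WAN link it's the difference between seconds and minutes.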
#8
Thanks all
Yep, SJSKyline - that's what I'd personally love to do, but it's a 3rd-party bespoke app and I'm not touching it. It gives us enough problems already.
So limiting the app's bandwidth IS possible? We're running a hardware firewall and router. I'll go off and research QoS and see how to set that up.
#9
Scooby Regular
The only problem you may have is the level at which QoS works.
For example, at the lowest (least intelligent) level it may allow you to restrict bandwidth (or at least set a guaranteed bandwidth) per IP address. So you could say: I want to guarantee at least 100K to the IP address of your MS Access server. The router won't then be able to tell whether the traffic is HTTP, generic TCP/UDP or specifically MS Access.
Some routers allow you to do QoS on protocols, but if they are designed for Internet usage this may be restricted to just HTTP, FTP, TCP, UDP and DNS, to name but a few.
There are 3rd-party products (Packeteer springs to mind) that do QoS at a much higher (and more intelligent) level - Outlook, Lotus Notes, SQL, SMTP, POP3, etc. Basically they allow you to do QoS right down to specific applications.
The problem with MS Access is you would need a drive-mapping (or UNC path) connection from the remote office. This would probably restrict what QoS settings you can use (if any), although it may allow you to limit other traffic such as HTTP, SMTP and POP3. By reducing the other traffic you have, in a roundabout way, guaranteed the rest of the bandwidth to the file-sharing connections.
I use a VPN across a 256Kb leased line and it's reasonable for file sharing, although I'd hate to have to rely on it. It does grind to a halt if the bandwidth is being hogged by someone downloading off the net.
Stefan
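For anyone curious how that kind of rate limiting works under the hood, the usual mechanism is a token bucket. Here's a toy version in Python - purely illustrative of the idea (real shaping happens in the router/firewall, not in application code), with the 100 kbit/s cap picked to match Stefan's example.

```python
# Toy token-bucket limiter: the mechanism a QoS-capable router uses to
# cap one traffic class while leaving the rest of the line free.
class TokenBucket:
    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec    # refill rate
        self.capacity = burst_bytes       # max saved-up allowance
        self.tokens = burst_bytes         # start with a full bucket
        self.last = 0.0                   # time of last check, seconds

    def allow(self, nbytes, now):
        """True if nbytes may be sent at time `now` (seconds)."""
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# Cap a class at 100 kbit/s (12,500 bytes/s) with a 12,500-byte burst.
bucket = TokenBucket(12_500, 12_500)
print(bucket.allow(12_500, 0.0))   # True  - burst allowance used
print(bucket.allow(1_000, 0.0))    # False - bucket now empty
print(bucket.allow(1_000, 1.0))    # True  - refilled after 1 second
```

Packets that fail the check get queued or dropped, which is what keeps the capped class from swamping the rest of the 512k line.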
#10
Scooby Regular
Daz is actually going to be running this between 2 SonicWALLs, which have the capability to run a limited form of bandwidth management. It will be sufficient to guarantee a minimum for the IPSec VPN tunnel that the database will run over.
As an aside, anyone who uses SonicWALLs may be interested in
www.sonicusers.com
#12
Scooby Regular
Short-term answer is something like Citrix. You can run this over something as slow as a 56k dial-up line and performance is good. We use this method for delivering all kinds of applications, including heavy Oracle-based systems, to remote offices.
Long-term answer is the Web.
HTH
Shaun.
#14
Why a web front end ?
Is it any more efficient to return a large recordset via HTTP than it is to return it directly to Access? Not really.
The whole premise of web based apps is that they are more efficient, but they have to be made more efficient.
Banging a web front end on an existing db won't do that for you.
Citrix is an expensive solution once you count the cost of a server and a Win2K licence - MF XP itself is almost no cost, but the client licences are expensive.
You could just plump for Terminal Services on Win2K using a full desktop connection; the ICA protocol bolts on top of that anyway.
Expensive to put a Terminal Server up though.
A PC at your main office just for Remote Desktop access is the cheapest way to get your user connected from the remote site, and will work fine within a 512k link. The Access db would reside on there. It's only display data being sent across the VPN then.
#15
Scooby Regular
I think the point being made about web front ends is that the interface is very lightweight (i.e. a browser) and there is minimal data transfer between the remote client and the main office. For that reason, a web front end does make more efficient use of the available bandwidth, and it would also make the setup more expandable than a simple XP Remote Desktop.
Remote Desktop is a no-frills solution, but remember the remote user is taking control of a PC on the LAN, so you need to work around that PC at the main office actually being available to use, and it's no good once you start adding more remote users.
Terminal Services is good, but it only supports a full remote desktop. Citrix is more expensive, but allows individual Published Applications and even the ability to access any application via a standard web browser. Obviously the costs rack up quickly with a setup like this, so it's a balancing act between usability, flexibility and cost.
I worked with Terminal Services/Citrix for about 4 years (since the early days of WinFrame), and one thing I always did was to see just how much data was being transferred between clients and servers. I remember setting up a remote user with access to a simple DOS accounting package across an ISDN line, and just loading the front screen required over 1Mb of data to be read from the server. Over a 128Kb ISDN line it still took nearly a minute just to get into the app.
I would suggest running some tests and monitoring the data read from and written to the database by a user to give you an idea how it may affect the bandwidth.
Stefan
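One cheap way to run the kind of test Stefan suggests is to serialise a representative recordset and measure it. This sketch again uses sqlite3 with an invented table; real ODBC/VPN traffic adds protocol overhead on top, so treat the figure as a lower bound.

```python
# Rough lower bound on what a query costs on the wire: serialise the
# recordset and measure it. Table/column names are hypothetical.
import sqlite3
import pickle

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stock (part TEXT, qty INTEGER)")
conn.executemany("INSERT INTO stock VALUES (?, ?)",
                 [(f"part{i}", i) for i in range(1_000)])

rows = conn.execute("SELECT * FROM stock").fetchall()
payload = len(pickle.dumps(rows))            # bytes the data alone occupies
secs_on_512k = payload / (512 * 1000 / 8)    # time to ship it at 512 kbit/s
print(f"{payload} bytes, ~{secs_on_512k:.2f}s on a 512k line")
```

Run that against a copy of the real schema with realistic row counts and you get a feel for whether the app's typical screens will be seconds or minutes over the WAN.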
#16
I think the point being made about web front ends is that the interface is very lightweight (i.e. a browser) and there is minimal data transfer between the remote client and the main office. For that reason, a web front end does make more efficient use of the available bandwidth, and it would also make the setup more expandable than a simple XP Remote Desktop.
It's a real misconception that a web front end is efficient simply because it uses HTTP.
The app has to be written in an efficient manner for it to be effective, usually with some processing being done back end before limited results are sent to the browser.
Data is data, whatever way you send it.
#17
DSOTM,
My preference for an HTTP front-end is due to a mixture of what Ozzy says and the opportunity for me to address the problems you mention - e.g. needlessly passing large quantities of data up and down the WAN, slowing the whole thing to a crawl.
Using an HTTP interface would allow me to optimise the whole design and fetch only the limited data I need. I would imagine there's huge room for bandwidth improvement over the current application's design.
*HOWEVER* it's 3rd-party, it appears to be held together with string, and I'm not touching it - so bang goes that idea.
I'm going to evaluate the QoS side of things and see what impact it has.
Thanks for your input though - some interesting stuff mentioned which I'll have to 'research'
-DV
#18
Scooby Regular
DSOTM,
That was exactly my point and DV needs to determine just how much data will travel to/from the client.
I'm no Access expert (he's on lunch), so I can't comment on how you could put an efficient web front-end onto an Access database.
If you did want to put a web front-end onto any Windows application without worrying about back-end processing or data travelling up/down the wire, then stick Citrix and Terminal Services on the main network and it'll do that in about 10mins.
Stefan