CrazyEngineers
  • Accelerating content delivery over the Web - Ideas?

    Kaustubh Katdare

    Administrator

    Updated: Oct 26, 2024
    Views: 1.1K
    This one should tickle the brains of lots of computer science engineers over here.

Apart from using web accelerators (which primarily do the caching), can there be fundamental changes in the way content is delivered over the Internet? I mean, the recent introduction of the SPDY protocol by Google made me think that the whole working of the Internet needs a re-look.

The Internet has evolved a lot over the last few years, and the underlying technologies can still be worked upon.

How do you think we can speed up content delivery on the Internet? Is reworking HTTP the only solution? Think! 😁

[PS: No 'use faster broadband' bullshit in this thread, please. Keep the context in mind before you post here.]
Replies
  • dragon.rider

    Member · Nov 13, 2009

    What kind of content do you mean?
    Big chunks of files, e.g. movies, or small file sharing, e.g. PDFs and DOCs?
  • sarveshgupta

    Member · Nov 13, 2009

    Great issue to talk about, Biggie.

    I was thinking about the same thing these days.

    Content delivery really needs a boost.
  • Kaustubh Katdare

    Administrator · Nov 13, 2009

    Web content is any content. In today's world, most of it is delivered through the Hypertext Transfer Protocol. Now, can we really think of ways to improve speed by making fundamental changes in the way this transfer works?

    This is similar to our earlier discussion on how videos can be transferred faster over the Internet on slower connections.
  • Ashraf HZ

    Member · Nov 13, 2009

    I'm not much of an expert on the application layer (in which HTTP and SPDY operate), but we still have a lot of room for improvement from the physical layer up to the data link layer. More efficient routing and better error-control algorithms will improve the overall data rate. With IPv6 on the way, jumbograms can be used to pack a lot of data into fewer packets, so one doesn't have to wait as long for all the packets of a given piece of content to arrive. How well this works out still depends on the hardware of the networks, of course.
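
    To put rough numbers on the jumbogram point, here is a small back-of-the-envelope sketch (plain Python; the content size and payload sizes are illustrative assumptions, not fixed protocol values):

    ```python
    # Rough illustration: how the packet count drops as the per-packet payload grows.
    import math

    CONTENT_SIZE = 100 * 1024 * 1024  # 100 MB of content to deliver (assumed)

    def packets_needed(payload_bytes: int) -> int:
        """Packets required to carry CONTENT_SIZE bytes at the given payload size."""
        return math.ceil(CONTENT_SIZE / payload_bytes)

    standard_payload = 1500 - 40       # typical Ethernet MTU minus an IPv6 header
    jumbo_payload = 1 * 1024 * 1024    # a hypothetical 1 MB jumbogram payload

    print("standard MTU:", packets_needed(standard_payload), "packets")
    print("jumbogram:   ", packets_needed(jumbo_payload), "packets")
    ```

    Fewer packets means less per-packet overhead and fewer waits, although, as noted, the network hardware has to support carrying them.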
  • Guttu

    Member · Nov 13, 2009

    I agree with ash. Hardware should be improved. Moving to grid computing will also help. Developing and implementing better content compression algorithms is another way, though that won't work for media files.
  • Kaustubh Katdare

    Administrator · Nov 13, 2009

    Hardware improvement is no doubt a good option; we can really get speedy transfers with thicker connections. What I'm wondering is: can HTTP itself be evolved?
  • sarveshgupta

    Member · Nov 13, 2009

    @Guttu: Why won't compression algorithms work for media files?
  • sarveshgupta

    Member · Nov 13, 2009

    @Biggie: Don't you think SPDY has already evolved HTTP to the next level?

    SPDY tries to remove the drawbacks of HTTP by allowing multiplexed streams, header compression, server push, and server hint.

    One notable thing is that they have targeted the application layer, so that implementing it doesn't require any major structural changes to the network.
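
    To make the header-compression point concrete: request headers are highly repetitive text, and SPDY compresses them with zlib. A rough sketch of the kind of saving that gives (the header block below is made up for illustration):

    ```python
    # Illustrative only: compress a typical-looking HTTP/1.1 request header block
    # with zlib, the same general mechanism SPDY uses for header compression.
    import zlib

    headers = (
        "GET /index.html HTTP/1.1\r\n"
        "Host: www.example.com\r\n"
        "User-Agent: Mozilla/5.0 (Windows NT 6.1; rv:5.0) Gecko/20100101\r\n"
        "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n"
        "Accept-Language: en-us,en;q=0.5\r\n"
        "Accept-Encoding: gzip, deflate\r\n"
        "Cookie: session=abc123; theme=dark\r\n"
        "\r\n"
    ).encode("ascii")

    compressed = zlib.compress(headers, 9)
    print(len(headers), "bytes raw ->", len(compressed), "bytes compressed")
    ```

    On a real connection the saving grows further, because SPDY keeps one compression context per session, so headers repeated across many requests cost almost nothing after the first one.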
  • Guttu

    Member · Nov 14, 2009

    sarveshgupta
    @Guttu: Why won't compression algorithms work for media files?
    As far as I know, media files are already compressed. Current general-purpose compression algorithms like zip or gzip don't make any real difference on them. And even if new algorithms are created for better compression, they will use very high resources, which in turn will need a hardware upgrade.
  • Kaustubh Katdare

    Administrator · Nov 14, 2009

    Well, I guess the fundamental change I'm talking about will come only when we step back and think about the problem HTTP solves, and then build our ideas on how that problem can be solved in some other way.
  • Ashraf HZ

    Member · Nov 14, 2009

    Biggie, if you wish to restrict the discussion to the application layer, then I think you need to make that clear in the topic title. There is no point in talking about anything else then, right? 😉
  • sarveshgupta

    Member · Nov 14, 2009

    Guttu
    As far as I know, media files are already compressed. Current general-purpose compression algorithms like zip or gzip don't make any real difference on them. And even if new algorithms are created for better compression, they will use very high resources, which in turn will need a hardware upgrade.
    How is it that media files are already compressed? I mean to ask, what is used to compress them in that initial phase?

    And why can't we apply zip or gzip to media files? Can you please provide more insight into this?
  • Guttu

    Member · Nov 18, 2009

    sarveshgupta
    How is it that media files are already compressed? I mean to ask, what is used to compress them in that initial phase?

    And why can't we apply zip or gzip to media files? Can you please provide more insight into this?
    There are codecs like MPEG-1, MPEG-2, MPEG-4, Xvid, DivX, and H.264 which compress the original stream. The original, uncompressed stream takes GBs of space.

    Just try to zip or gzip a media file and check the results for yourself.
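
    A quick way to check it (a small Python sketch; the file name is just a placeholder for any encoded video or MP3 you have on disk):

    ```python
    # Measure how little gzip helps on an already-encoded media file.
    import gzip

    path = "sample_video.mp4"  # placeholder: point this at any local media file

    with open(path, "rb") as f:
        data = f.read()

    compressed = gzip.compress(data)

    print(f"original: {len(data):,} bytes")
    print(f"gzipped:  {len(compressed):,} bytes")
    print(f"ratio:    {len(compressed) / len(data):.3f}")
    # For encoded media the ratio stays near 1.0 (sometimes slightly above),
    # whereas plain text or HTML typically shrinks to a fraction of its size.
    ```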