[BUG-134167] Script Memory Cap Increase as a Premium Feature? - Thoughts from the Aug 22, 2017 Server User Group #2205
Comments
Chaser Zaks commented at 2017-08-23T20:03:47Z
Only people who know what they are doing would know how to enable this.
If they go above this, one of two things can happen:
Kyle Linden commented at 2017-09-06T18:06:11Z

Hi Jasdac,

Thank you for your suggestion. The team has reviewed your request and determined that it is not something we can tackle at this time. Please be assured that we truly appreciate the time you invested in creating this feature request, and have given it thoughtful consideration among our review team. This wiki outlines some of the reasoning we use to determine which requests we can, or can't, take on: http://wiki.secondlife.com/wiki/Feature_Requests

Thanks again for your interest in improving Second Life.
How would you like the feature to work?
(Oz told me it was ok to make a new JIRA post about this)
Allow users to select a higher memory cap through a drop-down at compile time. I suggest steps of 16/32/64 KB and so on, with the highest option preferably no lower than 256 KB.
Being able to select a memory limit above 64 KB could be limited to premium users. I, for one, would finally get a premium account if this were added.
Why is this feature important to you? How would it benefit the community?
Issues this would solve in comparison to today's practice of using many small scripts instead of few larger ones:
Less overall memory use, since a script consumes memory just by having a state and an event listener.
Less script-time usage, due to less string parsing when passing data between scripts. Right now, to communicate between scripts in a linkset, your primary option is llMessageLinked. If you need to pass lists (which you usually do), you have to serialize them to strings and parse them back, which is much slower than using lists directly, especially when you wind up with many scripts in a project. On top of that, every script with a link_message handler has to check each message to see whether it was targeted at it, which costs additional script time.
Fewer asynchrony issues. One of the pains of LSL development is that there are no anonymous functions to use as callbacks, which makes development painful when many scripts need to share up-to-date data. Not having to split scripts would remedy that.
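To illustrate the serialization overhead described above, here is a minimal sketch of the receiving side of today's workaround (the channel number 42 and the "|" separator are arbitrary choices for this example, not anything standard):

```lsl
// Hypothetical receiver script. Every script in the linkset that has a
// link_message handler is woken for EVERY linked message; it must first
// check the channel, then re-parse the string back into a list. This is
// pure overhead compared to sharing a list inside one larger script.
integer CHANNEL = 42; // arbitrary channel agreed with the sender script

default
{
    link_message(integer sender, integer num, string str, key id)
    {
        if (num != CHANNEL) return; // not for us, but time already spent

        // The sender flattened its list with llDumpList2String(data, "|");
        // we pay again here to rebuild it.
        list data = llParseString2List(str, ["|"], []);
        llOwnerSay("Received " + (string)llGetListLength(data) + " items");
    }
}
```

With a single larger script, the list would simply be passed to a function call, with no serialization, no broadcast, and no per-script filtering.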
Potential issues this would create, and suggested fixes:
Issue: Feature is premium only. A premium user compiles a script using 256k and sends it to a non-premium user. What happens now?
Suggested Fix: The non-premium user can use the script as-is, but won't be able to recompile it without a premium account. If the script is moddable and they copy-paste the code into a new script, it will be compiled under their own settings, potentially generating a stack overflow if it consumes too much memory. The same applies when a user's premium membership expires: they can still use the script, but cannot recompile it until they restore premium.
Issue: Beginner scripters set the memory cap to the maximum even when they don't need it.
Suggested Fix: Have the drop-down default to, and label as "recommended", 64 KB. If it is changed to anything higher, show a small notice akin to "This is a high memory allocation. If you're not sure what this is, leave it at 64 KB (default)." Or, as was done for mesh import, require completing a short questionnaire before the feature can be used. There were complaints that nobody uses llSetMemoryLimit, but most people don't know how that feature works or what it does. Putting a memory-limit option directly in the compile window, with a hover tooltip, would help.
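For reference, a minimal sketch of how llSetMemoryLimit already works today (Mono scripts only): a script can voluntarily lower its own reserved memory below the 64 KB default, which is essentially the per-script version of what this request asks to expose in the compile window.

```lsl
// Sketch: voluntarily cap this script's memory at 16 KB.
// llSetMemoryLimit returns TRUE on success and FALSE if the
// requested limit is invalid (e.g. below current usage).
default
{
    state_entry()
    {
        if (llSetMemoryLimit(16384))
            llOwnerSay("Capped at 16 KB; free: "
                + (string)llGetFreeMemory() + " bytes");
        else
            llOwnerSay("Could not set memory limit");
    }
}
```

A compile-window drop-down would make this behavior visible up front instead of requiring scripters to discover the function on their own.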
Issue: Legacy viewers!
Suggested Fix: If no memory-limit selection is sent to the server at compile time for a Mono script, it should compile with the default 64 KB.
Issue: Would this work for mono only?
Suggested Fix: Probably, yes. People today sometimes say they compile things as LSL2 to save memory. If they could limit Mono to 16 KB or lower for their tiny scripts, there would be no real need to compile as LSL2 other than for legacy purposes.
Issue: With 4 times more script memory, it would be 4 times easier for abusers to flood the region's allocated memory.
Suggested Fix: Limit the feature to premium accounts. People will be less prone to abuse when there's a monetary investment involved.
Issue: What about limiting this to experiences?
Solution: Please no. If these improvements help reduce load on the region, it would be to everyone's benefit if projects could utilize this anywhere in the world.
Issue: A higher memory cap would slow region crossings.
Solution: Oz mentioned this. Currently, scripters work around the memory limit by splitting their project into multiple scripts. Would one 256 KB script cause more region-crossing delay than four 64 KB scripts? And if so, by how much?