Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) based upon a public use or sale or other public availability of the invention. The subject matter of claims 1-20 is taught in the Unity Documentation, the official software documentation for the Unity game engine. The documentation and the software are authored by Unity Technologies. Specifically, this rejection cites various sections of the Unity 2022.1 Documentation, referred to from here on as UD. The UD is divided into two halves: the 2022.1 user manual, referred to as UM, and the scripting API reference, referred to as API.
Regarding independent claims 11, 1, 16
The Unity Documentation teaches:
A system of creating physics-based content, comprising: at least one processor; and at least one memory communicatively coupled to the at least one processor and comprising computer-readable instructions that upon execution by the at least one processor cause the at least one processor to perform operations comprising (UD/UM/Working in Unity/Installing Unity/System requirements for Unity 2022.1:
[Image: media_image1.png (system requirements table for Unity 2022.1)]
As shown above, a processor/CPU is required along with an operating system; since an OS must be stored in and use memory, memory is also a requirement. This teaches all of the system components recited in the application.) establishing a set of physics controller nodes, wherein the set of physics controller nodes are configured to refine physics simulations in a three-dimensional (3D) environment; (UD/UM:
[Image: media_image2.png (Unity Editor overview)]
“Use the Unity Editor to create … 3D games, apps and experiences”;
UD/UM/Physics:
[Image: media_image3.png (Physics section overview)]
“Unity helps you simulate physics in your Project to ensure that the objects correctly accelerate and respond to collisions, gravity, and various other forces … You can achieve some basic physics goals with the user interface.” Note: Here the documentation teaches that the editor can create physics simulations in a 3D environment, and that it can do so using a user interface, discussed further below.) presenting user interfaces configured to implement visual scripting based on the set of physics controller nodes; (UD/API/Packages and feature sets/Visual Scripting:
[Image: media_image4.png (Visual Scripting overview)]
“Visual scripting is a workflow that uses visual, node-based graphs to design behaviors rather than write lines of C# script. Enabling artists, designers and programmers alike, visual scripting can be used to design final logic, quickly create prototypes, iterate on gameplay and create custom nodes to help streamline collaboration.” Note: Unity’s node-based visual scripting workflow is an example of a user interface, as it provides a visual means for users to interact with the editor’s functions. It teaches the application’s UI that implements “visual scripting based on the set of physics controller nodes”. As the quote states, Unity’s visual scripting is node based; the nodes come from C# script, and scripts for Unity are detailed in the Scripting API Reference. UD/API/UnityEngine/Classes/Rigidbody:
[Image: media_image5.png (Rigidbody class method listing)]
Note: The methods shown above come from the scripting API reference and are therefore also usable as nodes for visual scripting. Among them are physics nodes such as AddForce, AddTorque, etc. These methods belong to Rigidbody, which is described higher up on the same page as providing “control of an object’s position through physics simulation”, or in other words, how the physics simulation perceives objects. Since we have access to nodes which control forces in a physics simulation, this teaches the application’s “physics controller nodes”.)
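For illustration only, the cited Rigidbody methods can be sketched in a minimal Unity C# script (the class name and force values below are hypothetical examples, not taken from the UD):

```csharp
using UnityEngine;

// Illustrative sketch: applies a force and a torque to a Rigidbody,
// the same operations exposed as AddForce/AddTorque visual-scripting nodes.
public class PhysicsControllerExample : MonoBehaviour
{
    void FixedUpdate()
    {
        Rigidbody rb = GetComponent<Rigidbody>();
        rb.AddForce(new Vector3(0f, 10f, 0f));   // push the object upward
        rb.AddTorque(Vector3.up * 5f);           // spin it about the y-axis
    }
}
```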
customizing the physics simulations by utilizing the set of physics controller nodes based on user input received via the user interfaces; (Note: The UD teaches the physics controller nodes and the visual scripting UI in which they exist, as shown in the most recent citation. These physics nodes in Unity apply forces to objects in a physics simulation, meaning the physics simulation can be customized with the nodes via user input from the visual scripting UI, teaching this portion of the claim.) and creating content based on the customized and optimized physics simulations. (UD/UM: “Use the Unity Editor to create … 3D games, apps and experiences” UD/UM/Physics: “Unity helps you simulate physics in your Project…” These teach that Unity can create custom content based on physics simulations, and as the previous citation teaches the customization of the physics simulation, this claim portion is taught by the UD.)
Regarding Claims 12, 2, 17, dependent on 11, 1, 16
The Unity Documentation teaches:
The system of claim 11, the operations further comprising: implementing dynamic changes in a speed and direction of an object in the 3D environment and managing an acceleration of the object using an acceleration controller node in the set of physics controller nodes. (UD/API/UnityEngine/Classes/Rigidbody.AddForce:
[Image: media_image6.png (Rigidbody.AddForce documentation)]
Above we see the node/method AddForce, which applies to Rigidbody objects. UD/API/UnityEngine/Classes/Rigidbody states that a Rigidbody provides “control of an object’s position through physics simulation”, or in other words, how the physics simulation views objects. Here we see that AddForce can apply forces along the direction of its vector parameter, and that the acceleration mode “allows the type of force to be changed to an Acceleration”, which teaches the application’s acceleration controller node.)
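As an illustrative sketch only (the force values are hypothetical), the acceleration mode discussed above is selected by passing ForceMode.Acceleration to AddForce:

```csharp
using UnityEngine;

public class AccelerationExample : MonoBehaviour
{
    void FixedUpdate()
    {
        // ForceMode.Acceleration applies a continuous acceleration,
        // ignoring the Rigidbody's mass.
        GetComponent<Rigidbody>().AddForce(Vector3.forward * 2f, ForceMode.Acceleration);
    }
}
```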
Regarding claims 13, 3, 18, dependent on 11, 1, 16
The Unity Documentation teaches:
The system of claim 11, the operations further comprising: simulating realistic impacts and movements by applying instantaneous forces to an object in the 3D environment using an impulse node in the set of physics controller nodes. (UD/API/UnityEngine/Classes/Rigidbody.AddForce:
[Image: media_image6.png (Rigidbody.AddForce documentation)]
The physics node documentation for AddForce details a mode parameter which specifies the “Type of Force to apply”, as seen above under the Parameters table. In the Description we see that if the impulse mode is selected it “Interprets the parameter as an impulse”, or in other words, the force introduced will be an impulse, teaching the application’s impulse node.)
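A minimal illustrative sketch (hypothetical values) of selecting the impulse mode cited above:

```csharp
using UnityEngine;

public class ImpulseExample : MonoBehaviour
{
    void Start()
    {
        // ForceMode.Impulse applies the full force instantaneously,
        // factoring in the Rigidbody's mass, suiting it to impacts.
        GetComponent<Rigidbody>().AddForce(Vector3.up * 8f, ForceMode.Impulse);
    }
}
```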
Regarding claims 14, 4, 19, dependent on 11, 1, 16
The Unity Documentation teaches:
The system of claim 11, the operations further comprising: implementing sustained movements or interactions by applying continuous forces to objects in the 3D environment using a force controller node in the set of physics controller nodes. (UD/API/UnityEngine/Classes/Rigidbody.AddForce:
[Image: media_image6.png (Rigidbody.AddForce documentation)]
The AddForce node, when not modified by a mode parameter, defaults to the mode in which “Force is applied continuously” to objects, teaching the application’s force controller node.)
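For illustration only (hypothetical values), calling AddForce without a mode argument uses the default continuous-force behavior cited above:

```csharp
using UnityEngine;

public class ContinuousForceExample : MonoBehaviour
{
    void FixedUpdate()
    {
        // With no mode argument, AddForce defaults to ForceMode.Force:
        // a continuous, mass-dependent force applied each physics step.
        GetComponent<Rigidbody>().AddForce(Vector3.right * 3f);
    }
}
```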
Regarding claims 15, 7, 20, dependent on 11, 1, 16
The Unity Documentation teaches:
The system of claim 11, the operations further comprising: projecting a ray and facilitating line-of-sight interactions and distance measurements in the 3D environment using a ray cast node in the set of physics controller nodes. (UD/UM/Graphics/Cameras/Camera Tricks/Rays from the Camera:
[Image: media_image7.png (Rays from the Camera manual section)]
UD/API/UnityEngine/Classes/RaycastHit:
[Image: media_image8.png (RaycastHit documentation)]
The Rays from the Camera manual section above teaches not only the use of ray casting but also leveraging it with nodes like ScreenPointToRay and ViewportPointToRay to cast rays from a “camera”. Because these nodes cast rays from a point of view, or line of sight, this teaches the ability to “facilitate line-of-sight interactions” as recited in the application. In the RaycastHit image above, we see that a RaycastHit object is returned on every ray cast; the properties table shows it returns distance information, teaching the application’s distance measurement via a ray cast node.)
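An illustrative sketch of the cited ray casting nodes (the use of Camera.main and the mouse position as the ray origin is a hypothetical example, not from the UD):

```csharp
using UnityEngine;

public class RaycastExample : MonoBehaviour
{
    void Update()
    {
        // Cast a line-of-sight ray from the camera through the mouse
        // position, then read the hit distance from the RaycastHit.
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, out RaycastHit hit))
        {
            Debug.Log("Hit " + hit.collider.name + " at distance " + hit.distance);
        }
    }
}
```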
Regarding claim 5, dependent on claim 1,
The Unity Documentation teaches:
The method of claim 1, further comprising: implementing movements and behaviors by directly controlling a velocity of an object in the 3D environment using a velocity controller node in the set of physics controller nodes. (UD/API/UnityEngine/Classes/Rigidbody.AddForce:
[Image: media_image6.png (Rigidbody.AddForce documentation)]
Above we again see the AddForce node/method, which as shown previously applies forces to objects in a physics simulation. We can see in the description that if the VelocityChange mode is enabled it “interprets the parameter as a direct velocity change”, teaching the application’s velocity controller node.)
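A minimal illustrative sketch (hypothetical values) of the velocity-change mode cited above:

```csharp
using UnityEngine;

public class VelocityChangeExample : MonoBehaviour
{
    void Start()
    {
        // ForceMode.VelocityChange interprets the vector as a direct,
        // instantaneous change in velocity, ignoring mass.
        GetComponent<Rigidbody>().AddForce(new Vector3(5f, 0f, 0f), ForceMode.VelocityChange);
    }
}
```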
Regarding claim 6, dependent on claim 1,
The Unity Documentation teaches:
The method of claim 1, further comprising: detecting collisions between objects and triggering particular responses in the 3D environment using a collision event node in the set of physics controller nodes. (UD/UM/Physics/Built-in 3D Physics/Collision/Introduction to collision:
[Image: media_image9.png (Introduction to collision manual section)]
UD/API/UnityEngine/Classes/Collider.OnCollisionEnter(Collision):
[Image: media_image10.png (Collider.OnCollisionEnter documentation)]
As shown in the introduction to collision section of the manual above, the “scripting system can detect when collisions occur and initiate actions using the OnCollisionEnter function”. Above we also see the OnCollisionEnter function in the scripting API, showing it is usable as a node. As the OnCollisionEnter node/function allows responses to be triggered when collisions are detected, it teaches the application’s collision event node.)
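An illustrative sketch of the cited collision callback (the logged response is a hypothetical example):

```csharp
using UnityEngine;

public class CollisionEventExample : MonoBehaviour
{
    // Called by the engine when this object's collider first touches
    // another collider, allowing a particular response to be triggered.
    void OnCollisionEnter(Collision collision)
    {
        Debug.Log("Collided with " + collision.gameObject.name);
    }
}
```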
Regarding claim 8, dependent on claim 1,
The Unity Documentation teaches:
The method of claim 1, further comprising: displaying real-time physics properties of objects in the 3D environment using a physics information node in the set of physics controller nodes; (
UD/API/UnityEngine/Classes/Input.GetAccelerationEvent:
[Image: media_image11.png (Input.GetAccelerationEvent documentation)]
UD/API/UnityEngine/Classes/Rigidbody:
[Image: media_image12.png (Rigidbody class documentation)]
Above we see several nodes which provide physics properties of objects, such as their velocity. Similar nodes exist for other physics properties, like GetAccelerationEvent, which returns live information about acceleration that occurred during the most recent frame. Here the UD teaches the application’s physics information node, as it contains a number of nodes which report physics information such as acceleration, velocity, etc.) displaying information about collision events in the 3D environment using a collision information node in the set of physics controller nodes (UD/API/UnityEngine/Classes/Collision:
[Image: media_image13.png (Collision class documentation)]
Above we see the Collision node/object that is created whenever a collision occurs, which provides all the vital information about the collision, such as the object hit, the contact points, the total impulse applied in the contact, etc. This shows that the UD’s Collision node teaches the application’s collision information node.); or displaying information about objects hit by a ray in the 3D environment using a ray hit information node in the set of physics controller nodes. (UD/API/UnityEngine/Classes/RaycastHit:
[Image: media_image8.png (RaycastHit documentation)]
Similar to the Collision class, the RaycastHit node/structure shown above provides relevant information about a ray hit, such as the distance, what was hit, the texture of the collision surface, the impact point on the object, the surface that was hit, etc. This teaches the application’s ray hit information node.)
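For illustration only, the cited information sources (Rigidbody properties, Collision, RaycastHit) can be read in one sketch; the logged fields shown are hypothetical examples:

```csharp
using UnityEngine;

public class PhysicsInfoExample : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        // Collision carries the impulse and contact data for the event.
        Debug.Log("Impulse: " + collision.impulse);
    }

    void Update()
    {
        // Rigidbody exposes live physics properties such as velocity.
        Debug.Log("Velocity: " + GetComponent<Rigidbody>().velocity);

        // RaycastHit reports the distance, point, and object hit.
        if (Physics.Raycast(transform.position, transform.forward, out RaycastHit hit))
        {
            Debug.Log("Ray hit " + hit.collider.name + " at " + hit.point);
        }
    }
}
```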
Regarding claim 9, dependent on claim 1,
The Unity Documentation teaches:
The method of claim 1, further comprising: implementing switches between local and world space references in applying forces, accelerations, and velocities to objects in the 3D environment using the set of physics controller nodes. (UD/API/UnityEngine/Classes/Transform:
[Image: media_image14.png (Transform method listing)]
[Image: media_image15.png (Transform method listing)]
UD/API/UnityEngine/Classes/Rigidbody.AddForce:
[Image: media_image6.png (Rigidbody.AddForce documentation)]
Here we see examples of various Transform and InverseTransform nodes/methods which allow for switching between local and world space. In the reproduced image of AddForce, which can be modified to control acceleration, velocity, and impulses, we see that what provides the core “force” with direction is the vector supplied to it. Since we can switch vectors between local and world space, we can also switch the “forces, accelerations, and velocities” between local and world space, as the application states.)
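An illustrative sketch of the cited space-switching methods combined with AddForce (direction and scale values are hypothetical):

```csharp
using UnityEngine;

public class LocalWorldForceExample : MonoBehaviour
{
    void FixedUpdate()
    {
        Rigidbody rb = GetComponent<Rigidbody>();

        // AddForce expects a world-space vector; TransformDirection
        // converts a local-space direction into world space, switching
        // the reference frame in which the force is applied.
        Vector3 worldForward = transform.TransformDirection(Vector3.forward);
        rb.AddForce(worldForward * 4f);

        // InverseTransformDirection performs the opposite conversion,
        // expressing the world-space velocity in local space.
        Vector3 localVelocity = transform.InverseTransformDirection(rb.velocity);
    }
}
```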
Regarding claim 10, dependent on claim 1,
The Unity Documentation teaches:
The method of claim 1, further comprising: implementing real-time visualizations of the physical simulations during creating the content.
(UD/UM/Physics/Built-in 3D Physics/Collision/Physics Debug Visualization:
[Image: media_image16.png (Physics Debug Visualization manual section)]
UD/API/UnityEngine/Classes/PhysicsVisualizationSettings:
[Image: media_image17.png (PhysicsVisualizationSettings documentation)]
In the image above we see the UD teach that physics simulation aspects like collisions, forces, impulses, inertia, center of mass, etc. can all be visualized with the Physics Debug Visualization setting enabled. The visualizations occur in the scene; per UD/UM/Working in Unity/Create Gameplay/Scenes: “Scenes are where you work with content in Unity. They are assets that contain all or part of a game or application”. Here we learn that scenes are part of content creation in Unity and are viewable during the live content creation process. Since physics information can be seen visualized in the scene, this teaches the ability to see real-time physics information visualized while creating the content.
UD/API/UnityEngine/Classes/Gizmos:
[Image: media_image18.png (Gizmos class documentation)]
UD/API/UnityEngine/Classes/Gizmos.DrawLine:
[Image: media_image19.png (Gizmos.DrawLine documentation)]
Above we see the documentation for Gizmos, tools for visual debugging that visualize information in the “Scene view”, the same live view in which collision information can be seen when physics debug visualization is enabled. Under the static methods we see that physics-related information like rays can be displayed using gizmos via nodes/methods such as DrawRay. DrawLine will display a vector from point to point, and since the different force types all come from AddForce, which uses a vector, all force types can be visualized with this Gizmo. The many options the UD presents for visualizing physics in the scene, which is where you “work with content in Unity” and see a live state of your content, teach the application’s real-time visualization of physics simulations during content creation.)
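An illustrative sketch of the cited Gizmos drawing methods (the appliedForce field is a hypothetical example vector, not from the UD):

```csharp
using UnityEngine;

public class ForceGizmoExample : MonoBehaviour
{
    public Vector3 appliedForce = new Vector3(0f, 10f, 0f); // hypothetical force vector

    // OnDrawGizmos runs in the editor and draws into the Scene view,
    // so the force vector is visualized during content creation.
    void OnDrawGizmos()
    {
        Gizmos.color = Color.red;
        Gizmos.DrawRay(transform.position, appliedForce);
    }
}
```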
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN GREGORY HAKALA whose telephone number is (571)272-7863. The examiner can normally be reached 8:00am-5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon can be reached at (571)-270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALAN GREGORY HAKALA/Examiner, Art Unit 2617
/KING Y POON/Supervisory Patent Examiner, Art Unit 2617