DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 11/03/2023 and 5/22/2025 were filed before the first Office action. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Response to Arguments
Applicant's arguments filed 12/3/2025 have been fully considered but they are not persuasive.
Argument 1: The applicant alleges that McKegney does not teach a “fit component” or a “functional component”
The examiner respectfully disagrees. Instant par. 0026 describes a functional component as any part or component having a connection interface, such as McKegney’s browser or agent (McKegney, 0008), which are interfaces for connecting on the web. The fit component, which instant par. 0026 teaches to be any remaining portion of the CAD model, is then read on McKegney’s 3D rendering code (McKegney, 0008).
Argument 2: The applicant alleges that McKegney does not teach “providing at least an altered fit component to an additive manufacturing device”.
The examiner respectfully disagrees. Applicant's arguments do not comply with 37 CFR 1.111(c) because they do not clearly point out the patentable novelty which he or she thinks the claims present in view of the state of the art disclosed by the references cited or the objections made. Further, they do not show how the amendments avoid such references or objections. The specification is silent on the applicant’s intended use of “altered fit”. The examiner has cited McKegney 0028, as it calls for edited (altered fit) 3D scenes. The examiner is interpreting the manufacture as rendering for display.
Argument 3: The applicant alleges that Franklin does not teach “wherein the model is a unitary model when obtained”.
The examiner respectfully disagrees. The citation of Franklin, 0031, specifically calls for a model to be fabricated from an “obtained” digital model.
Claim Rejection Notes
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 2, 4, 5, 7, 8, 10-13, 17, 18, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by McKegney et al. (US 20170024112 A1, published: 1/26/2017).
Claim 1: McKegney teaches a method of modifying an object model in a web browser, the method comprising: at a client device:
defining a template page having a plurality of nodes (a cloud-based environment is configured to interface with storage devices that store 3D object models and a 3D rendering code base that is accessible by two or more user devices over a network using browsers or web agents. A web application is delivered to the user device, whereupon the browser or agent receives the 3D object model and the 3D rendering code into a web application that has user controls for defining at least one 3D scene that is rendered using user-defined camera shots and events associated with the 3D object model [McKegney, 0008]; Examiner's Note: wherein the template page is read on the 3D object models, and the nodes are read on the two or more user devices over the network);
obtaining a model of a peripheral, wherein the model includes at least one fit component (3D rendering code [McKegney, 0008]) and at least one functional component (a web application is delivered to the user device, whereupon the browser or agent receives the 3D object model and the 3D rendering code into a web application that has user controls for defining at least one 3D scene that is rendered using user-defined camera shots and events associated with the 3D object model. A preview of the 3D scene is presented by rendering the shots and events in a web application or web agent [McKegney, 0008]; [McKegney, 0043-46]; Examiner's Note: wherein the browser or agent are being read as the at least one functional component, as these are connection interfaces per instant par. 0026), wherein the at least one fit component has at least one user-adjustable property that is adjustable without altering the at least one functional component (setting up a new scene can be initiated by selecting a 3D object model from the 3D assets 142 for the scene. As a specific example, the web designer 105.sub.1 might specify one or more shots in a scene by specifying certain attributes (e.g., camera type, camera position, camera orientation, etc.) of respective instances of the shot components 132. Further, the web designer 105.sub.1 might specify one or more events in a scene by specifying certain attributes (e.g., event listener, event trigger, event action, etc.) of respective instances of the event handler components 134 [McKegney, 0037]);
presenting to a user with a rendering engine of the template page, a rendering of the model in a first node of the plurality of nodes of the template page (as shown in FIG. 1, a web designer 105.sub.1 might want to include 3D content in a web application, but does not have the skills to write the low order 3D graphics rendering code to be interpreted by a browser displaying the web application [McKegney, 0035, FIG. 1]. The 3D scene browser interaction editing technique 100 can be implemented in a web application (e.g., visual editor 106.sub.1) operating in a browser 104.sub.1 or operating in a web agent that is accessible to the web designer 105.sub.1 (e.g., from a user device) [McKegney, 0036]; [McKegney, FIG. 1]);
presenting to the user, in a second node of the plurality of nodes of the template page, at least one user interface (UI) input, wherein a UI value of the UI input is associated with a model value of at least one property of the at least one fit component of the model (user controls serve to invoke generation of executable shot components corresponding to one or more shots. User controls also serve to invoke generation of one or more event handler components that correspond to at least one event. A preview of the 3D scene is presented by rendering the shots and events in a web application or web agent [McKegney, 0008]. FIG. 2C1 shows the web agent or browser 204.sub.2 and web page 206 of FIG. 2A to illustrate a technique for delivering web-based interactive 3D scenes composed using a high order shots and high order events browser editor [McKegney, 0053, FIG. 2C1]);
receiving a user input that changes the UI value of the UI input through the second node (when the web pages associated with the web application have been built (see step 318) and the 3D embed code included, certain 3D interface controls in one or more web pages can be connected to the embedded 3D project (see step 320). Such connections can be enabled by web page calls to an API associated with the 3D runtime engine [McKegney, 0060]. The user interface view 4A00 exemplifies the web designer 105.sub.3 specifying a first camera shot for a 3D scene by dragging and dropping a 3D object 410 from the 3D object selection window 406 to the 3D scene setup window 402. The web designer 105.sub.3 can then select a camera (e.g., default camera “camera1”) for the first shot from certain cameras provided by the visual editor 106.sub.3. [McKegney, 0064]);
changing the model value of the at least one property of at least one fit component in response to the UI value (the 3D runtime engine 210.sub.1 operates the main update and render loops of the interactive 3D scene and processes web page events (e.g., button clicks, object picks, etc.) associated with the 3D scene. For example, the 3D runtime engine 210.sub.1 can process events that trigger object resize, focus, blur, etc. [McKegney, 0045]);
updating the rendering of the model in the first node of the template page at the client device based on the model value (the 3D runtime engine 210.sub.1 handles the loading, rendering, and execution of web-based interactive 3D scenes composed using a high order shots and high order events browser editor. The 3D runtime engine 210.sub.1 operates the main update and render loops of the interactive 3D scene and processes web page events (e.g., button clicks, object picks, etc.) associated with the 3D scene. For example, the 3D runtime engine 210.sub.1 can process events that trigger object resize, focus, blur, etc. [McKegney, 0045]); and
providing at least an altered fit component to an additive manufacturing device (composing web-based interactive 3D scenes using a high order shots and high order events browser editor [McKegney, 0028]. The 3D runtime engine 210.sub.1 can use the hardware detector 234 to help ensure an embedded 3D project in a specific browser operating on a specific device performs as expected [McKegney, 0047]).
Claims 13 and 18, having similar elements to claim 1, are likewise rejected.
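For illustration of the data flow recited in claim 1 only, the following minimal browser-side sketch shows a UI value in one node driving a property of a fit component and an updated rendering in another node. All identifiers (PeripheralModel, renderModel, the element ids, and the slider binding) are hypothetical assumptions made for this sketch and are not taken from McKegney or the instant specification.

```typescript
// Hypothetical sketch: a UI input in a second node of a template page changes a
// property of a fit component, and the rendering of the model in a first node is
// updated in response. Assumes elements with ids "model-view" and "length-slider"
// exist in the page.

interface FitComponent {
  lengthMm: number;            // user-adjustable property of the fit component
}

interface FunctionalComponent {
  connectionInterface: string; // not altered by fit-component adjustments
}

interface PeripheralModel {
  fit: FitComponent;
  functional: FunctionalComponent;
}

const model: PeripheralModel = {
  fit: { lengthMm: 50 },
  functional: { connectionInterface: "USB-C" },
};

// First node: rendering target; second node: the UI input (a slider).
const renderNode = document.getElementById("model-view") as HTMLElement;
const slider = document.getElementById("length-slider") as HTMLInputElement;

function renderModel(m: PeripheralModel, target: HTMLElement): void {
  // Stand-in for a real rendering engine call (e.g., a WebGL scene update).
  target.textContent = `Fit length: ${m.fit.lengthMm} mm`;
}

// Event handler: a change to the UI value updates the model value of the fit
// component, and the rendering in the first node is then refreshed.
slider.addEventListener("input", () => {
  model.fit.lengthMm = Number(slider.value);
  renderModel(model, renderNode);
});

renderModel(model, renderNode);
```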
Claim 2: McKegney teaches the method of claim 1. McKegney further teaches wherein the UI input includes a slider ([McKegney, FIG. 4A]; Examiner's Note: as illustrated, there are 3 sliders: FOV, Near Plane, and Far Plane).
Claim 4: McKegney teaches the method of claim 1. McKegney further teaches the method further comprising selecting the at least one fit component in the first node, and changing the UI input of the second node based at least partially on the at least one fit component selected (the web designer 105.sub.3 specifies a first camera shot for a 3D scene by dragging and dropping a 3D object 410 from the 3D object selection window 406 to the 3D scene setup window 402, and can then select a camera for the first shot [McKegney, 0064]; [McKegney, FIGs. 4A-C]).
Claim 5: McKegney teaches the method of claim 1. McKegney further teaches wherein the at least one property of the fit component includes a dimension of the fit component (the 3D runtime engine 210.sub.1 can further comprise a component library 236, according to one or more embodiments. As shown, the component library 236 can comprise the shot components 132 and the event handler components 134. Each component in the component library can listen to predefined events (e.g., specified by the web designer 105.sub.1 in the visual editor 106.sub.1), such as frame updates, window resizing, viewer input, etc. The component library 236 can further include built in components (e.g., a rotate component, an explode component, a translate/rotate/scale component, an animate component, etc.) to provide common functionality without demanding that the web designer write any code [McKegney, 0048]).
Claim 7: McKegney teaches the method of claim 1. McKegney further teaches wherein the rendering component includes at least one of a pan component, a rotation component, and a zoom component (an animated translation of the second shot camera (e.g., zoom out) is desired [McKegney, 0069]; [McKegney, FIGs. 4A-C]; Examiner's Note: as illustrated across FIGs. 4A-C, three different positions of the camera are shown rotated around the object 410).
Claim 8: McKegney teaches the method of claim 7. McKegney further teaches the plurality of nodes further comprising a third node including a view control, wherein a view input to the view control causes the rendering component to execute at least one of panning, rotating, or zooming the rendering of the model in the first node at the client device ([McKegney, FIGs. 4A-C]; Examiner's Note: as illustrated across FIGs. 4A-C, three different positions of the camera are shown rotated around the object 410).
Claim 10: McKegney teaches the method of claim 1. McKegney further teaches wherein obtaining the model of a peripheral includes obtaining a package that exports one or more libraries to the client device ([McKegney, 0037]).
Claim 11: McKegney teaches the method of claim 10. McKegney further teaches wherein changing the model value of the at least one property of at least one fit component in response to the UI value includes the library detecting a change in the UI value (each component in the component library can listen to predefined events (e.g., specified by the web designer 105.sub.1 in the visual editor 106.sub.1), such as frame updates, window resizing, viewer input, etc. [McKegney, 0048]).
Claim 12: McKegney teaches the method of claim 1. McKegney further teaches wherein updating the rendering of the model is based at least partially on an event handler detecting the user input to the at least one UI input (each component in the component library can listen to predefined events (e.g., specified by the web designer 105.sub.1 in the visual editor 106.sub.1), such as frame updates, window resizing, viewer input, etc. [McKegney, 0048]).
Claim 17: McKegney teaches the method of claim 13. McKegney further teaches wherein the peripheral is an adaptive electronic device peripheral (some environments can include peripherals such as HMD displays (e.g., Oculus, GearVR, Cardboard, etc.), depth-sensing or environment-sensing input devices (e.g., Kinect, Leap motion, touch displays, Internet of things (IoT) devices, etc.) [McKegney, 0080]).
Claim 20: McKegney teaches the system of claim 18. McKegney further teaches wherein an event handler detects the user input that changes the UI value and the event handler changes the model value (user controls serve to invoke generation of executable shot components corresponding to one or more shots. User controls also serve to invoke generation of one or more event handler components that correspond to at least one event. A preview of the 3D scene is presented by rendering the shots and events in a web application or web agent [McKegney, 0008]).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over McKegney et al. (US 20170024112 A1, published: 1/26/2017), in view of Matjasko et al. (US 20160140754 A1, published: 5/19/2016).
Claim 3: McKegney teaches the method of claim 1. McKegney does not teach wherein the rendering of the model is updated in real-time.
However, Matjasko teaches wherein the rendering of the model is updated in real-time (as a user manipulates the virtual objects 105, the rendering may be updated in real-time (or substantially in real-time) so as to realistically and fluidly display the virtual object in simulated environmental conditions [Matjasko, 0047]).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the graphical rendering manufacturing invention of McKegney to include the real-time rendering update feature of Matjasko.
One would have been motivated to make this modification so that the rendering of a model is updated immediately, in real time, as the model is manipulated, rather than after a delay.
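As a purely illustrative aside, a common browser pattern for keeping a rendering updated in (substantially) real time is a per-frame render loop. The sketch below is a generic example with hypothetical names; it is not code from McKegney or Matjasko.

```typescript
// Generic real-time update pattern: re-render on the next animation frame
// whenever a tracked value changes. Assumes an element with id "model-view".

let dirty = false;
let currentLengthMm = 50;

function markDirty(newLength: number): void {
  currentLengthMm = newLength;
  dirty = true;
}

function draw(): void {
  const view = document.getElementById("model-view");
  if (view) {
    view.textContent = `Fit length: ${currentLengthMm} mm`;
  }
}

function frame(): void {
  if (dirty) {
    draw();       // update the rendering as soon as the value has changed
    dirty = false;
  }
  requestAnimationFrame(frame); // main update/render loop
}

// Example wiring: a slider input marks the rendering as needing an update.
document.getElementById("length-slider")?.addEventListener("input", (e) => {
  markDirty(Number((e.target as HTMLInputElement).value));
});

requestAnimationFrame(frame);
```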
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over McKegney et al. (US 20170024112 A1, published: 1/26/2017), in view of Goh (US 20140315640 A1, published: 10/23/2014).
Claim 6: McKegney teaches the method of claim 1. McKegney does not teach wherein the at least one property of the fit component includes a material property of the fit component.
However, Goh teaches wherein the at least one property of the fit component includes a material property of the fit component (the shell may include a comparatively more flexible material at gripping positions on the shell handle portions. Alternatively the shell may be designed to allow soft or flexible patches to be affixed on it, for example by providing recesses and clip-on points for such patches. The position of such patches or flexible materials may be responsive to measurements of the user's hands in a similar manner to the other parameters described previously herein [Goh, 0078]).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the graphical rendering manufacturing invention of McKegney to include the material property feature of Goh.
One would have been motivated to make this modification so that the physically manufactured item reflects the material properties of what had been graphically modeled.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over McKegney et al. (US 20170024112 A1, published: 1/26/2017), in view of Pincus et al. (US 20120098742 A1, published: 4/26/2012).
Claim 9: McKegney teaches the method of claim 1. McKegney does not teach further comprising sending an altered CAD model to the additive manufacturing device.
However, Pincus teaches further comprising sending an altered CAD model to the additive manufacturing device (manufacturing device uses data to produce device body, the data may be supplied to an appropriate device that may, for instance, be a computer controlled machine such as, but not limited to, a CNC miller, a CNC lathe, a 3D printing machine or some combination thereof [Pincus, 0112, FIG. 15]).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the graphical rendering manufacturing invention of McKegney to include the altered-model manufacturing feature of Pincus.
One would have been motivated to make this modification to allow alterations to the modeled product before manufacture, so that improvements or corrections can be made before fabrication.
Claim 19, having similar elements to claim 9, is likewise rejected.
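For illustration only, providing an altered CAD model to an additive manufacturing device might be sketched as below. The endpoint URL, payload format, and device API are hypothetical assumptions for this sketch and are not taken from Pincus or the instant specification.

```typescript
// Hypothetical sketch of sending an altered model to an additive
// manufacturing device over HTTP. All names and the endpoint are illustrative.

async function sendToPrinter(alteredModel: object): Promise<void> {
  const response = await fetch("https://printer.example.local/jobs", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(alteredModel),
  });
  if (!response.ok) {
    throw new Error(`Print job rejected: ${response.status}`);
  }
}

// Example: send only the altered fit component for fabrication.
sendToPrinter({ fit: { lengthMm: 55 } }).catch(console.error);
```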
Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over McKegney et al. (US 20170024112 A1, published: 1/26/2017), in view of Franklin et al. (US 20150025548 A1, published: 1/22/2015).
Claim 14: McKegney teaches the method of claim 13. McKegney does not teach wherein the model is a unitary model when obtained.
However, Franklin teaches wherein the model is a unitary model when obtained (from this digital model, a solid physical model of the unitary positioning interface is fabricated [Franklin, 0031]. A claim containing a “recitation with respect to the manner in which a claimed apparatus is intended to be employed does not differentiate the claimed apparatus from a prior art apparatus” if the prior art apparatus teaches all the structural limitations of the claim [MPEP 2114, II]).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the graphical rendering manufacturing invention of McKegney to include the unitary model feature of Franklin.
One would have been motivated to make this modification so that the obtained digital model can be fabricated directly as a single, unitary physical model.
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over McKegney et al. (US 20170024112 A1, published: 1/26/2017), in view of Sikka et al. (US 20210081841 A1, published: 3/18/2021).
Claim 15: McKegney teaches the method of claim 13. McKegney does not teach wherein identifying one or more functional components includes inputting the model into a machine learning (ML) system.
However, Sikka teaches wherein identifying one or more functional components includes inputting the model into a machine learning (ML) system (AI design application 120 generates 602 a user interface (e.g., GUI 124) that includes one or more components for visually generating a machine learning model. For example, AI design application 120 renders, within GUI 124, graphical objects representing neurons, layers, layer types, connections, activation functions, inputs, outputs, and/or other components of a neural network [Sikka, 0091]).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the graphical rendering manufacturing invention of McKegney to include the machine learning model interface feature of Sikka.
One would have been motivated to make this modification to identify components of a model using a machine learning system whose components can be generated and inspected visually on a displayed user interface.
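For illustration only, identifying functional components by inputting a model into an ML system might be sketched as follows. The classifier, its inputs, and all identifiers are hypothetical assumptions; the simple rule-based placeholder stands in for a trained model and is not drawn from Sikka.

```typescript
// Hypothetical sketch: parts of an obtained model are passed to a classifier
// (here a rule-based placeholder standing in for a trained ML model) that
// labels each part as a functional component or a fit component.

interface ModelPart {
  name: string;
  hasConnectionInterface: boolean;
  vertexCount: number;
}

// Placeholder for a trained classifier; a real system might call an inference
// service or run an on-device model instead of this rule.
function classifyPart(part: ModelPart): "functional" | "fit" {
  return part.hasConnectionInterface ? "functional" : "fit";
}

function identifyFunctionalComponents(parts: ModelPart[]): ModelPart[] {
  return parts.filter((p) => classifyPart(p) === "functional");
}

const parts: ModelPart[] = [
  { name: "usb-plug", hasConnectionInterface: true, vertexCount: 1200 },
  { name: "grip-shell", hasConnectionInterface: false, vertexCount: 5400 },
];

console.log(identifyFunctionalComponents(parts).map((p) => p.name)); // ["usb-plug"]
```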
Claim 16 is rejected under 35 U.S.C. 103 as being unpatentable over McKegney et al. (US 20170024112 A1, published: 1/26/2017) and Sikka et al. (US 20210081841 A1, published: 3/18/2021), and further in view of Yu et al. (US 20220180052 A1, published: 6/9/2022).
Claim 16: The combination of McKegney and Sikka teaches the method of claim 15. The combination of McKegney and Sikka does not teach wherein the ML system identifies at least one connection interface.
However, Yu teaches wherein the ML system identifies at least one connection interface (a collaborative communication application or service, that is used to establish an electronic meeting, may detect components (e.g., webcam and microphone) of a user computing device 102 and automatically establish a connection thereto to enable a live camera feed to be presented for the electronic meeting [Yu, 0030]. A claim containing a “recitation with respect to the manner in which a claimed apparatus is intended to be employed does not differentiate the claimed apparatus from a prior art apparatus” if the prior art apparatus teaches all the structural limitations of the claim [MPEP 2114, II]).
Therefore, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the combination of McKegney and Sikka to include the connection interface detection feature of Yu.
One would have been motivated to make this modification so that connection interfaces of available components can be detected and connected to automatically, as when establishing a live feed for an electronic meeting.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SETH A SILVERMAN whose telephone number is (571)272-9783. The examiner can normally be reached Mon-Thur, 8AM-4PM MST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Adam Queler, can be reached at (571)272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Seth A Silverman/Primary Examiner, Art Unit 2172