DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Remarks
This office action is responsive to the amendment filed on 08/05/2025.
Claims 1, 3-8, 10-15, and 17-23 are pending in the application.
Independent claims 1, 8, and 15 were amended.
Claims 21-23 were added.
Claims 2, 9, and 16 were canceled.
Response to Arguments
Applicant's arguments regarding the amended portions recited in independent claim 1 (and similarly in independent claims 8 and 15), filed 08/05/2025, have been fully considered and are persuasive. However, upon further consideration, a new ground of rejection is made, adding Baran to be relied upon for the aforementioned amended portions. To note, applicant's amendment necessitated the new ground of rejection presented in this office action.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
Claims 1, 3, 5, 8, 10, 12, 15, 17, 19, and 21-23 are rejected under 35 U.S.C. 103 as being unpatentable over De Biswas (US 2013/0144566 A1) in view of Wright, Jr. et al. (US 2019/0370544 A1) and Baran (US 2020/0004894 A1).
Regarding claim 1, De Biswas teaches a system comprising:
one or more processors (e.g. [0197]: deployed in part or in whole through a machine that executes computer software, program codes, and/or instructions on a processor); and
logic encoded in one or more non-transitory computer-readable storage media for execution by the one or more processors and when executed operable to cause the one or more processors to perform operations (e.g. further in [0197]: processor may include memory that stores methods, codes, instructions and programs) comprising:
activating an expert user chat in an expert user interface (e.g. [0025]: provides a system that may include a collaborative 3D model space configured for collaborative design by a plurality of users distributed over a network; components accessible in the 3D model space may be concurrently accessible to the plurality of users; system may further include an interface port of the 3D model space for providing access to at least one component so that each of the plurality of users can individually manipulate a three-dimensional view of the at least one component on disparate client computers; [0121]: the design platform 100 enables the collaborative design by users via the user interfaces 600; see also [0136]: users may also send messages/notifications and may share projects with users to which they are connected, such as through a social-type network; [0194]: these same 3D on-line collaborative modeling methods and systems may facilitate expert collaboration workflows of a 3D model space; guide a user to select and interact with a subject matter expert; Examiner’s note: this shows that a user may be an Expert);
activating a second user chat in a second user interface (e.g. as above, [0121]: the design platform 100 enables the collaborative design by users via the user interfaces 600; [0136]: users may also send messages/notifications and may share projects with users to which they are connected, such as through a social-type network);
rendering a three-dimensional (3D) model in the expert user interface and in the second user interface (e.g. [0119],Fig.6: shared 3D modeling space 400; there is illustrated a user interface 600a and a second user interface 600b appearing on the user devices 202a, 202b, respectively; in the present example, users of the user interfaces 202a, 202b are collaborating on the design of the sub-space 404b of Fig.4 comprising sub-component 7; Examiner’s note: this shows rendering of the 3D model onto the plurality of users’ interfaces);
rendering controls in the expert user interface that enable an expert user to manipulate the 3D model in the expert user interface (e.g. as above, [0025]: providing access to at least one component so that each of the plurality of users can individually manipulate a three-dimensional view of the at least one component; [0121]: the design platform 100 enables the collaborative design by users via the user interfaces 600; see also [0110]: the user interface 500 operates to coordinate the display of rendering data with third party modeling software; may be integrated with third-party modeling software programs so that the rendering and 3D/2D manipulation capabilities of the third-party software may be used to create and/or modify a sub-component that has been accessed from the 3D model space; Examiner’s note: this suggests controls would be rendered for model manipulation); and
rendering controls in the second user interface that enable a second user to manipulate the 3D model in the second user interface (e.g. as above, [0025]: plurality of users can individually manipulate a three-dimensional view; [0121]: collaborative design by users via the user interfaces 600; [0110]: the rendering and 3D/2D manipulation capabilities of the third-party software may be used to create and/or modify a sub-component; Examiner’s note: this suggests controls would be rendered for model manipulation),
wherein the controls in the expert user interface and the controls in the customer user interface enable the expert user and the customer user to manipulate the 3D model simultaneously (e.g. as above, [0025]: plurality of users can individually manipulate a three-dimensional view; see also [0130]: API may enable concurrent design, multiple analysis and modeling processes, plug and play modules, and the like by more than one user on one or more client devices 202)
but does not explicitly teach a system,
wherein the second user is a customer,
wherein the 3D model is of a vehicle, and
wherein the 3D model as manipulated is simultaneously rendered in the expert user interface and in the customer user interface.
However, Wright, Jr. teaches a system,
wherein the second user is a customer (e.g. [0005]: initiating communication between a user of a device and a remote individual or party for assisting the user in interacting with an object; [0030]: shared AR system platform supports an augmented shared visual space for live mobile remote collaboration; [0053],Fig.6D: once the call to support has been established, as shown in Fig.6D, the user may be given the option to share the user's view, thereby transitioning the call to an AR communication session; Examiner’s note: user may be viewed as a customer as they are calling for support), and
wherein the 3D model is of a vehicle (e.g. [0032]: for example, the user may be performing maintenance on an engine of a vehicle and may encounter object(s) within the engine having issues that the user is unable to diagnose and/or address; [0136]: 3D representation of the object may be communicated as data and may be reconstructed by the expert device; Examiner’s note: this shows that the 3D representation may be of a vehicle).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of De Biswas to collaborate with customer support, in the same conventional manner as taught by Wright, Jr., as both deal with remote collaboration. The motivation to combine the two would be that it would allow customers to communicate remotely with support regarding objects, such as vehicles.
Further, Baran teaches a system,
wherein the 3D model as manipulated is simultaneously rendered in a first user interface and in a second user interface (e.g. Abstract: several CAD users, each using their own computer, phone, or tablet, can edit the same 3D Model at the same time; editing may be separate and simultaneous; as a result, users see each other's changes occur in real-time).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teachings of De Biswas and Wright, Jr. to edit 3D models simultaneously, in the same conventional manner as taught by Baran, as both deal with collaborative display of a 3D model. The motivation to combine the two would be that it would allow manipulations made by each user to be rendered and viewed by all users in real time.
Regarding medium claim 8 and method claim 15: claims 8 and 15 recite limitations that are similar in scope to the limitations recited in claim 1. Therefore, claims 8 and 15 are subject to rejection under the same rationale as applied hereinabove for claim 1.
Regarding claim 3, the combination of De Biswas, Wright, Jr. and Baran teaches a system, wherein the controls in the expert user interface and the controls in the customer user interface enable the expert user and the customer user to manipulate objects of the 3D model (e.g. De Biswas, as above, [0025]: plurality of users can individually manipulate a three-dimensional view; [0110]: the rendering and 3D/2D manipulation capabilities of the third-party software may be used to create and/or modify a sub-component), and wherein the objects represent physical components of the vehicle (e.g. Wright, Jr. as above, [0032]: for example, the user may be performing maintenance on an engine of a vehicle and may encounter object(s) within the engine having issues that the user is unable to diagnose and/or address).
In addition, the same rationale/motivation of claim 1 is used for claim 3.
Regarding medium claim 10 and method claim 17: claims 10 and 17 recite limitations that are similar in scope to the limitations recited in claim 3. Therefore, claims 10 and 17 are subject to rejection under the same rationale as applied hereinabove for claim 3.
Regarding claim 5, the combination of De Biswas, Wright, Jr. and Baran teaches a system, wherein the logic when executed is further operable to cause the one or more processors to perform operations comprising:
enabling the expert user and the customer user to converse with each other via the expert user chat and customer user chat (e.g. De Biswas as above, [0136]: users may also send messages/notifications and may share projects with users to which they are connected, such as through a social-type network; [0194]: these same 3D on-line collaborative modeling methods and systems may facilitate expert collaboration workflows of a 3D model space; guide a user to select and interact with a subject matter expert; Examiner’s note: this suggests that users may chat/interact with other users/experts; Wright, Jr. relied upon for the other user being a customer);
enabling the expert user and the customer user to configure the vehicle using the controls in the expert user interface and the controls in the customer user interface (e.g. De Biswas as above, [0121]: the design platform 100 enables the collaborative design by users via the user interfaces 600; [0110]: rendering and 3D/2D manipulation capabilities of the third-party software may be used to create and/or modify a sub-component that has been accessed from the 3D model space; [0194]: these same 3D on-line collaborative modeling methods and systems may facilitate expert collaboration workflows of a 3D model space; Examiner’s note: this suggests that users/experts may configure the object; Wright, Jr. relied upon for the object being a vehicle); and
storing vehicle configuration data associated with the vehicle in a database of vehicle data (e.g. De Biswas, [0110]: a user interface may include a first portion that represents information, including graphic or hierarchical information about one or more sub-components in the 3D model space, and a second portion for editing the one or more sub-components with a model editing software that is separate from the 3D model space, wherein the user can control downloading of the one or more sub-components from the 3D model space for editing with the model editing software and uploading of a new version of the one or more sub-components from the model editing software to the 3D model space; Examiner’s note: this shows that edits to the 3D model are uploaded/saved; Wright, Jr. relied upon for the object being a vehicle).
Regarding medium claim 12 and method claim 19: claims 12 and 19 recite limitations that are similar in scope to the limitations recited in claim 5. Therefore, claims 12 and 19 are subject to rejection under the same rationale as applied hereinabove for claim 5.
Regarding claim 21, the combination of De Biswas, Wright, Jr. and Baran teaches the system, wherein the 3D model as manipulated is simultaneously rendered in the expert user interface and in the customer user interface immediately when the controls in the expert user interface and the controls in the customer user interface are used by the expert user and the customer user to manipulate the 3D model (e.g. Baran as above, Abstract: several CAD users, each using their own computer, phone, or tablet, can edit the same 3D Model at the same time; editing may be separate and simultaneous; as a result, users see each other's changes occur in real-time).
In addition, the same rationale/motivation of claim 1 is used for claim 21.
Regarding medium claim 22 and method claim 23: claims 22 and 23 recite limitations that are similar in scope to the limitations recited in claim 21. Therefore, claims 22 and 23 are subject to rejection under the same rationale as applied hereinabove for claim 21.
Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of De Biswas, Wright, Jr. and Baran as applied to claims 1, 8, and 15 above, and further in view of Vats et al. (US 2015/0220244 A1).
Regarding claim 4, the combination of De Biswas, Wright, Jr. and Baran teaches a system, wherein the controls in the expert user interface and the controls in the customer user interface enable the expert user and the customer user to manipulate objects of the 3D model (e.g. De Biswas, as above, [0025]: plurality of users can individually manipulate a three-dimensional view; [0110]: the rendering and 3D/2D manipulation capabilities of the third-party software may be used to create and/or modify a sub-component), but does not explicitly teach the system, wherein the objects represent electronic operations of the vehicle.
However, Vats teaches a system, wherein the objects represent electronic operations of the vehicle (e.g. [0094],Fig.12b: user can understand functionality of the music system 1202, by operating the music system in the displayed 3D model using user-controlled interaction unit 131 in association with a virtual operating sub-system (VOS) of the user-controlled interaction sub-system 133 of the electronic panel system 100; user on pressing an ON-button on the music system 1202, the music system 1202 starts and displays software features on a display 1220 of the music system 1202 as shown in illustration (b) of Fig.12b replicating a real scenario as in physical cars music system).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teachings of De Biswas, Wright, Jr. and Baran to interact with and simulate the 3D model, in the same conventional manner as taught by Vats, as both deal with displaying 3D models of objects, such as vehicles. The motivation to combine the two would be that it would allow the user to simulate operations of a vehicle, such as electronic operations, using the displayed 3D model.
Regarding medium claim 11 and method claim 18: claims 11 and 18 recite limitations that are similar in scope to the limitations recited in claim 4. Therefore, claims 11 and 18 are subject to rejection under the same rationale as applied hereinabove for claim 4.
Claims 6, 13, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over the combination of De Biswas, Wright, Jr. and Baran as applied to claims 1, 8, and 15 above, and further in view of Brown et al. (US 2010/0223158 A1).
Regarding claim 6, the combination of De Biswas, Wright, Jr. and Baran teaches the system of claim 1, but does not explicitly teach the system, wherein the logic when executed is further operable to cause the one or more processors to perform operations comprising:
accessing a database of vehicle data; and
fetching, from the database, vehicle specifications of the vehicle, wherein the vehicle specifications comprise features previously selected by the customer user.
However, Brown teaches a system, comprising:
accessing a database of vehicle data (e.g. [0079],Fig.7: after accessing the website, the customer initiates a vehicle build as set forth in block 250; the customer makes relevant selections related to the vehicle to be built (block 252); examples of relevant selections include vehicle model and year, packages, options, vehicle color, and the like; the customer may save the information related to the vehicle assembled thus far (block 260); if the customer decides to save the information, an identifying name or description for the vehicle configuration is solicited and received from the customer (block 276); this name/description is then saved by the server (block 276); Examiner’s note: this suggests the user accesses the server to build a vehicle, then saves the specification to the server); and
fetching, from the database, vehicle specifications of the vehicle, wherein the vehicle specifications comprise features previously selected by the customer user (e.g. as above, [0079],Fig.7: name/description is then saved by the server (block 276); see also [0083],Fig.11: system determines if there are any saved vehicle configurations (block 374); if there are saved configurations, the system prompts the customer to select one of the saved configurations (block 376); Examiner’s note: this suggests that the vehicle specification, saved to the server, can then be retrieved).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the combined teachings of De Biswas, Wright, Jr. and Baran to save a vehicle configuration, in the same conventional manner as taught by Brown, as both deal with vehicle manipulation/specification. The motivation to combine the two would be that it would allow the user to save user-specified data about an object, such as a vehicle, and retrieve it for later use.
Regarding medium claim 13 and method claim 20: claims 13 and 20 recite limitations that are similar in scope to the limitations recited in claim 6. Therefore, claims 13 and 20 are subject to rejection under the same rationale as applied hereinabove for claim 6.
Allowable Subject Matter
Claims 7 and 14 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Claims 7 and 14 were carefully reviewed, and a search with regard to independent claims 1 and 8 has been made. Accordingly, those claims are believed to be distinct from the prior art searched.
Regarding claims 7 and 14 (which depend from independent claims 1 and 8, respectively), the prior art searched was found to neither anticipate nor suggest a system/medium comprising: accessing a database of vehicle data; fetching, from the database, vehicle specifications of the vehicle, wherein the vehicle had been purchased by the customer user; and enabling the expert user to conduct a vehicle orientation for the customer user (emphasis added).
It is viewed that the previously cited references and the prior art searched, in part or in whole, cannot be combined in such a way as to render the claimed invention obvious.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JED-JUSTIN IMPERIAL whose telephone number is (571)270-5807. The examiner can normally be reached Monday to Friday, 9am - 6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Daniel Hajnik can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JED-JUSTIN IMPERIAL/Examiner, Art Unit 2616
/DANIEL F HAJNIK/Supervisory Patent Examiner, Art Unit 2616