Prosecution Insights
Last updated: April 19, 2026
Application No. 16/350,071

REALISTIC GUI BASED INTERACTIONS WITH VIRTUAL GUI OF VIRTUAL 3D OBJECTS

Non-Final OA: §103, §112
Filed: Aug 22, 2018
Examiner: BADAWI, ANGIE M
Art Unit: 2179
Tech Center: 2100 — Computer Architecture & Software
Assignee: Try And Buy Fashion Design Private Limited
OA Round: 9 (Non-Final)
Grant Probability: 59% (Moderate)
OA Rounds: 9-10
To Grant: 4y 1m
With Interview: 97%

Examiner Intelligence

Career Allow Rate: 59% of resolved cases (168 granted / 285 resolved; +3.9% vs TC avg)
Interview Lift: +38.5% on resolved cases with interview (strong)
Typical Timeline: 4y 1m avg prosecution; 17 currently pending
Career History: 302 total applications across all art units
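The headline allow-rate figure is consistent with the raw counts in the card above. A quick check, assuming the career allow rate is simply granted divided by resolved (the interview-lift and timeline figures are not reproducible from the counts shown, so they are left out):

```python
# Career allow rate from the counts shown in the examiner card.
granted = 168
resolved = 285
allow_rate = granted / resolved * 100  # percent

print(f"{allow_rate:.1f}%")  # 58.9%, displayed as 59% in the card
```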

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 48.5% (+8.5% vs TC avg)
§102: 15.5% (-24.5% vs TC avg)
§112: 22.7% (-17.3% vs TC avg)
Tech Center averages are estimates. Based on career data from 285 resolved cases.

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

The Amendment filed on 11/10/2025 has been received and entered. Claims 1-15 are now pending. Claim 1 has been amended.

Continued Examination under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/10/2025 has been entered.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

As per Claim 1, the claim recites "wherein the 3D model is rendered as a textured mesh object in the 3D graphics environment, and". The Examiner reviewed the specification and did not find support for the newly added limitation.

As per Claims 2-15, the claims are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement due to their dependency on claim 1.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. A broad range or limitation together with a narrow range or limitation that falls within the broad range or limitation (in the same claim) may be considered indefinite if the resulting claim does not clearly set forth the metes and bounds of the patent protection desired. See MPEP § 2173.05(c).
In the present instance, claim 1 recites the broad recitation "whereas the virtual interactive display is in any orientation or perspective in synchronization with the 3d model,", and the claim also recites "wherein the virtual interactive display surface is in any orientation or perspective in synchronization with the 3D model geometry," which is the narrower statement of the range/limitation. The claim(s) are considered indefinite because there is a question or doubt as to whether the feature introduced by such narrower language is (a) merely exemplary of the remainder of the claim, and therefore not required, or (b) a required feature of the claims. Appropriate distinction and/or amendment is required.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 and 10-14 are rejected under 35 U.S.C. 103 as being unpatentable over Ames et al. (U.S. Pub. 2015/0082181), hereinafter Ames, in view of Jovanovic (U.S. Pub. 2015/0332511), hereinafter Jovan, in view of YI et al. (U.S. Pub. 2017/0357406), hereinafter Yi, and in view of FIALKOW (U.S. Pub. 2019/0243882), hereinafter Fial.
As per Claim 1, Ames teaches a method for realistically interacting with a 3D model of an object in a 3D computer graphics environment, wherein the displayed 3D model is capable of performing user controlled interaction and has at least one virtual interactive display adapted to display a graphical user interface similar to an interactive display of the object, the method comprising: receiving an input for interaction on the 3D model; if the interaction input is provided in a region of the virtual interactive display, then the interaction input is applied to the graphical user interface of this virtual interactive display only (Fig. 3A-3B, Fig. 8A, ¶22, ¶35, ¶45, wherein enabling three-dimensional (3D) display and interaction with interfaces (such as a webpage, a content page, an application, etc.) when the device is operating in a 3D view mode, wherein the user can make a motion or gesture in the field of view of the device that can cause the device to alter an appearance of the displayed interface elements, wherein interface elements can appear to be positioned and/or displayed in 3D space such that certain interface elements (e.g., text, images, etc.) become larger in 3D depth and/or appear closer to a surface of a display screen of the computing device, while other interface elements (e.g., advertisements) "fall back" or appear smaller in 3D depth. As the user tilts, rotates, or otherwise changes the orientation of the device, or performs a gesture (e.g., waves at the device) or touch input, the interface elements can move back and forth or otherwise change shape or appearance, wherein a display screen 802 of a computing device 800 can display at least one interface object 808 on an interface 801. In this example, the interface object is an image of an elephant);

else, user controlled interaction is applied on the 3D model or its parts (Fig. 7A, Fig. 7B, ¶44, wherein the 3D view mode can be activated by tilting the device, and wherein the user can interact with the pages, such as through touch input or gesture input, to cause the device to navigate to the selected page. For example, the user can "tap" or otherwise select the desired page, and upon selecting the page the device can cause the respective page to load);

processing the interaction input and producing: corresponding change in multimedia on the virtual interactive display, or performing user controlled interaction in the 3D model or its part/s, or change in multimedia on the virtual interactive display, or a combination thereof (¶35, wherein the device can adjust the appearance of shadows associated with the relevant interface elements to make those elements appear to be higher in the interface, as well as to give a 3D appearance, as each shadow can move in position relative to an associated element as the point of view changes. Further, the interface can render sidewalls or other elements that appear to provide a depth of the interface element from the point of view of the user, and the extent and shape of these elements can adjust with changes in point of view, as well as an orientation of the device. Various other behaviors can be used as well to mimic 3D behavior as well as an appearance of stacked interface elements);

wherein user controlled interaction comprises interacting with at least the 3D model as a whole or its part/s other than the virtual interactive display, to perform any change in the 3D model as a whole or its part/s, or a view of the 3D model representing output of the interaction (Fig. 4A, Fig. 4B, ¶38, wherein as a user of the computing device tilts, rotates, translates, flicks, or otherwise changes a relative orientation of the device, the display of the content can be adjusted to provide a view of a different one of the walls. For example, when the user rotates the device counterclockwise 432 around an axis 430 of the device, the rotation of the device can cause the content displayed to shift accordingly).

Ames previously taught the 3D model and interaction with the virtual interactive display. However, Ames does not explicitly teach disabling of further receiving of interaction input on the 3D model or its part/s while the interaction with the virtual interactive display is being carried out, whereas the virtual interactive display is in any orientation or perspective in synchronization with the 3D model.

Jovan teaches having at least one virtual interactive display adapted to display a graphical user interface similar to an interactive display of the object (Fig. 4A-F, ¶36, ¶57, wherein FIGS. 4A, 4B, 4C, 4D, 4E, and 4F are representative examples of moving a 3D object and projecting a projection of the 3D object in the modeled 2D environment including an interactive catalog, and wherein the moving module 216 may be configured to receive an object spinning request for rotational movement of the 3D object imported onto the 2D environment. The spinning request thus received is passed on to the spinning module 218, which allows spinning or any such rotational movement of the 3D object in the 2D environment. For example, the 3D object inserted onto the 2D environment might be a chair or triangular table, and the user may prefer to precisely orient the chair seat in a particular direction or, in the case of the triangular table, may prefer the three corners of the table oriented in certain preferred directions), and disabling of further receiving of interaction input on the 3D model or its part/s other than the virtual interactive display, while this interaction with the virtual interactive display is being carried out, whereas the virtual interactive display is in any orientation or perspective in synchronization with the 3D model (Fig. 7A, ¶84, ¶85, wherein the "Live Mode" button allows the user 120 to switch between edit mode (where objects may be moved, edited and so forth) and a "live" mode where the end result is displayed, and wherein the third virtual button 656, which is labeled "Add Products," may be selected by the user 120 to add 3D objects).

It would have been obvious to one having ordinary skill in the art at the time the invention was filed to utilize the teaching of an interactive catalog for 3D objects of Jovan with the teaching of three-dimensional object display of Ames, because Jovan teaches providing an interactive catalog associated with the 3D model of the object while positioning the 3D model of the object onto the 2D environment, wherein walls may be selectively positioned within the image. Further, in some examples, a 3D object may then be positioned within the 2D image with perspective and scale overlay, combined image 262, wherein the improvement herein is that the 3D object may be realistically positioned within the resulting image 264 based on the perspective and scale overlay information. Further, the 3D object may be positioned within resulting image 264 such that the 3D object may be perceived in three dimensions within the 2D environment. (Abstract, ¶44)

Ames as modified does not explicitly teach wherein the 3D model is rendered as a textured mesh object in the 3D graphics environment.

Yi teaches wherein the 3D model is rendered as a textured mesh object in the 3D graphics environment (Fig. 10, Fig. 11, Fig. 13, ¶95, ¶96, ¶97, wherein the 3D volume information of FIG. 10 and the 3D mesh information of FIG. 11 are displayed together in a single 3D space in the state of being overlaid on each other while maintaining the depth information of each of them, and wherein a user menu adapted to enable 3D mesh information to be modified may be provided).

It would have been obvious to one having ordinary skill in the art at the time the invention was filed to utilize the teaching of a medical image display system providing a user interface enabling a three-dimensional mesh to be edited of Yi with the teaching of three-dimensional object display of Ames as modified, because Yi teaches providing an improved user-friendly user interface which can reduce the time required for the optimization of a 3D mesh and which enables a 3D mesh to be edited in a single display environment in an integrated manner, wherein the display information is 3D display information represented in the 3D space and may represent depth information in the 3D space. The display information may be generated by overlaying the 3D mesh information on the 3D volume information in the 3D space including the 3D volume information. The user menu is provided such that the user can edit the 3D mesh information in the 3D space. (¶11, ¶16)

Ames as modified previously taught wherein the virtual interactive display surface is in any orientation or perspective in synchronization with the 3D model geometry. However, Ames as modified does not explicitly teach input events directed to the virtual interactive display surface are processed by a distinct interface handler separate from a 3D model interaction handler.

Fial teaches input events directed to the virtual interactive display surface are processed by a distinct interface handler separate from a 3D model interaction handler (Fig. 6, ¶85, wherein FIG. 6 illustrates the elements of EOH system 80 and WBS editor 30 and their relationships, and wherein WBS editor 30 may further comprise an input method handler 31 and a component manipulation handler 32.
Component manipulation handler 32 may further comprise a transparency handler 321, a z-order handler 322, a general behavior handler 323, a 3D display handler 324 and a side display handler 325. It will be appreciated that the sub-elements of manipulation handler 32 may be responsible for modifications involving specific types of component attributes).

It would have been obvious to one having ordinary skill in the art at the time the invention was filed to utilize the teaching of handling overlapping objects in visual editing systems of Fial with the teaching of three-dimensional object display of Ames as modified, because Fial teaches an improved website building system; the system includes a visual editor to support user editing of a website page of the website building system, the page having regular components and overlapped and hidden components, and an editor overlap handler to determine display instructions for the visual editor for the overlapped and hidden components according to activation conditions, the activation conditions based on a user selected point on the page, activation rules and information on components of the page, the information comprising component proximity to or component interaction with the user selected point, z-order and at least one of: general relationships between the components on the page, information on the user and information on the system. (¶34)

As per Claim 2, the rejection of claim 1 is hereby incorporated by reference. Ames as modified further teaches comprising: receiving another interaction input to interact with the 3D model or its other part/s, other than the virtual interactive display; processing the another interaction input (Fig. 4C, ¶64, wherein the user may use the finger icon 350 or other suitable indicator to select the display 324 and display a menu bar 360. The menu bar 360 may include several options displayed by virtual icon buttons. For example, a virtual icon button 362 may be selected by the user to "UNDO" an action performed, such as selection of a 3D object from the interactive catalog 380. Another virtual icon button 364 may be selected by the user to move the selected 3D object or the display 324; as taught by Jovan); displaying corresponding user controlled interaction onto the 3D model or its part/s, or multimedia change on the graphical user interface being displayed onto the virtual interactive display along with the user controlled interaction onto the 3D model or its parts (Fig. 4C, ¶64, ¶65, wherein another virtual icon button 364 may be selected by the user to move the selected 3D object or the display 324, or a further virtual icon button 366 may be selected by the user to "SPIN" or rotate the selected 3D object along an axis passing through the selected 3D object; as taught by Jovan).

As per Claim 3, the rejection of claim 1 is hereby incorporated by reference. Ames as modified further teaches wherein the user controlled interaction of the 3D model or 3D model part/s comprises at least one of extrusive interaction for interacting with exterior region or parts of the 3D model, intrusive interactions for interacting with internal parts or interior region of the 3D model, a time bound change based interaction, or a real environment mapping based interaction, or combination thereof, wherein the time bound changes refers to representation of changes in the 3D model demonstrating change in physical property of the object in a span of time on using or operating of the object, and real environment mapping refers to capturing a real time environment, mapping and simulating the real time environment to create a simulated environment for interacting with the 3D model.
(¶35, wherein the device can adjust the appearance of shadows associated with the relevant interface elements to make those elements appear to be higher in the interface, as well as to give a 3D appearance, as each shadow can move in position relative to an associated element as the point of view changes. Further, the interface can render sidewalls or other elements that appear to provide a depth of the interface element from the point of view of the user, and the extent and shape of these elements can adjust with changes in point of view, as well as an orientation of the device. Various other behaviors can be used as well to mimic 3D behavior as well as an appearance of stacked interface elements; as taught by Ames)

As per Claim 4, the rejection of claim 1 is hereby incorporated by reference. Ames as modified further teaches wherein the 3D model comprises a lighting part, and the interaction input on the 3D model results in the user controlled interaction for showing lighting effect onto the lighting part of the 3D model (¶55, wherein the visualizing module 210 may further help the user 120 to alter view settings such as brightness or contrast of the imported 2D environment. Altering the brightness or contrast of the 2D environment may allow the user to visualize the positioning of the 3D object in the 2D environment under more light or less light situations, wherein the user may be able to visualize and appreciate how the 3D object superimposed on the 2D environment may look during day time versus night time conditions, or conditions of bright lighting or dim lighting where a lamp or light fixture is being used; as taught by Jovan).

As per Claim 5, the rejection of claim 1 is hereby incorporated by reference. Ames as modified further teaches wherein the lighting effect is produced by the change in texture of the lighting surface, whereas the changed texture is a video (Fig. 1B, Fig. 7, ¶42, ¶55, ¶85, wherein the visualizing module 210 may further help the user 120 to alter view settings such as brightness or contrast of the imported 2D environment, and wherein a 2D environment may be provided including a 2D image 260. The 2D image 260 may be a photograph, line drawing or video; as taught by Jovan).

As per Claim 6, the rejection of claim 1 is hereby incorporated by reference. Ames as modified further teaches wherein the lighting effect is produced by changing the brightness or other environmental parameters to show the effect (¶55, wherein the visualizing module 210 may further help the user 120 to alter view settings such as brightness or contrast of the imported 2D environment; as taught by Jovan).

As per Claim 7, the rejection of claim 1 is hereby incorporated by reference. Ames as modified further teaches wherein the 3D model comprises a camera related feature, and to receive the interaction input on the 3D model, and to display a real environment mapping interaction by capturing a real time environment using a video or image capturing feature of a user device on which the 3D model of the object is being displayed (¶40, wherein the device can further include at least one camera 550 configured to capture one or more images in the camera's field of view 542, such as an image of the user. The image of the user can be processed using one or more facial and/or gaze tracking algorithms to determine a viewing or gaze direction of the user with respect to the device. The gaze direction can be used to determine an area, interface object, or other portion of the display screen of the computing device the user may be viewing.
Accordingly, the rendering of the interface elements can change as the relative gaze direction of the user changes with respect to the device; as taught by Ames).

As per Claim 8, the rejection of claim 1 is hereby incorporated by reference. Ames as modified further teaches wherein receiving the interaction input while the virtual interactive display is in any plane or orientation in synchronization with the 3D model (¶33, ¶35, wherein the rendering of the interface elements can change as the orientation of the device is changed. This can include tilting, rotating, or otherwise changing a position of the device; as taught by Ames).

As per Claim 10, the rejection of claim 1 is hereby incorporated by reference. Ames as modified further teaches wherein the multimedia displayed on the virtual interactive display shows graphics which have different Graphical User Interface or data in different layers, or containers, or real operating system, or software (Fig. 3A, Fig. 3B, ¶30, ¶35, wherein computing device 300 can be rendered to have at least two (and in many situations more) different "levels" or z-depths, where the upper level of some interface elements is rendered to appear near the outer surface of the display screen and the upper level of other interface elements can be rendered to appear at a lower level to the interface, and wherein upon detecting a change in orientation of the computing device 300, the interface 301 is rendered such that the interface elements are divided into a stack-like arrangement, where the interface elements appear to be stacked or otherwise arranged on top of each other; as taught by Ames).

As per Claim 11, the rejection of claim 1 is hereby incorporated by reference. Ames as modified further teaches wherein the interactive virtual display shows and allows interaction with a browser running on the display which is connected through a network via a network interface of a user device on which the 3D model is being displayed (Fig. 3A-3B, Fig. 12A-12B, ¶22, ¶35, ¶53, wherein enabling three-dimensional (3D) display and interaction with interfaces (such as a webpage, a content page, an application, etc.) when the device is operating in a 3D view mode, wherein the user can make a motion or gesture in the field of view of the device that can cause the device to alter an appearance of the displayed interface elements, and wherein the speed of appearance or the duration of time at which the second set of content takes to load and render the second page can be based on network speed (e.g., data connection speed); as taught by Ames).

As per Claim 12, the rejection of claim 1 is hereby incorporated by reference. Ames as modified further teaches wherein the interactive virtual display shows and enables interaction with the GUI of software running on the display (Fig. 3, ¶30, ¶65, wherein upon detecting an activation of the 3D view mode, the interface 301 displayed on a display screen 302 of a computing device 300 can be rendered to have at least two different "levels" or z-depths, where the upper level of some interface elements is rendered to appear near the outer surface of the display screen and the upper level of other interface elements can be rendered to appear at a lower level to the interface, and wherein the device can adjust the appearance of shadows associated with the relevant interface elements to make those elements appear to be higher in the interface, as well as to give a 3D appearance, as each shadow can move in position relative to an associated element as the point of view changes; as taught by Ames).

As per Claim 13, the rejection of claim 1 is hereby incorporated by reference. Ames as modified further teaches wherein the interactive virtual display shows and enables interaction with a representation of real software or Operating System or Control panel as a layered 2D graphics interactive video, or it loads a different layer at run time (Fig. 1B, Fig. 7, ¶42, ¶85, wherein a 2D environment may be provided including a 2D image 260. The 2D image 260 may be a photograph, line drawing or video, and wherein the third virtual button 656, which is labeled "Add Products," may be selected by the user 120 to add 3D objects; as taught by Jovan).

As per Claim 14, the rejection of claim 1 is hereby incorporated by reference. Ames as modified further teaches wherein the interaction on the interactive virtual display shows 2D graphics or Software or real Operating System which is running on a server and connected to the user device via a network, whereas after getting user input the software processes on the server and transfers the current GUI to the virtual interactive surface (Fig. 1A, Fig. 1B, ¶34, ¶37, ¶42, visualization of 3D models of objects in a 2D environment, wherein the 2D image 260 may be saved or imported from a storage device on a remote server, and wherein the engine 200 for visualization of 3D objects in a 2D environment may comprise a local device-based, network-based, or web-based service available on any of the user devices 130. The user may further interact with the web applications 204. The web applications may include social networking services, wherein the user may be connected to various social networking services and/or microblogs, such as Facebook.TM., Twitter.TM., and other such networking services. Connection to social networking services and/or microblogs may allow the user to interact with his contacts to share and obtain opinion and feedback on the image obtained after placing 3D objects in the 2D environment. Further, the user may also request help from designing services to arrange 3D objects within a given 2D environment; as taught by Jovan).

Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Ames in view of Jovan, in view of Yi, and in view of Fial, as applied to claim 1 above, and further in view of Mullins et al. (U.S. Pub. 2016/0057511), hereinafter Mullins.
As per Claim 9, the rejection of claim 1 is hereby incorporated by reference. Ames as modified previously taught the 3D model and interaction input. However, Ames as modified does not explicitly teach wherein the 3D model behaves as a virtual machine running an operating system or software, and receiving the interaction input to interact with the operating system or the software.

Mullins teaches wherein the 3D model behaves as a virtual machine running an operating system or software, and receiving the interaction input to interact with the operating system or the software (Fig. 3, ¶35, ¶54, wherein features of the 3D virtual machine may include selectable icons on the 3D virtual model of the machine that the user interacts with).

It would have been obvious to one having ordinary skill in the art at the time the invention was filed to utilize the teaching of remote sensor access and queuing of Mullins with the teaching of three-dimensional object display of Ames as modified, because Mullins teaches remotely accessing sensor data from wearable devices, wherein augmented reality (AR) applications allow a user to experience information, such as in the form of a three-dimensional (3D) virtual object overlaid on an image of a physical object captured by a camera of a wearable device. (¶17, ¶18)

Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Ames in view of Jovan, in view of Yi, and in view of Fial, as applied to claim 1 above, and further in view of LEE et al. (U.S. Pub. 2018/0364881), hereinafter Lee.

As per Claim 15, the rejection of claim 1 is hereby incorporated by reference. Ames as modified previously taught the 3D model, interactive display, and interaction input.
However, Ames as modified does not explicitly teach wherein the 3D model comprises two or more virtual interactive displays, and the interaction input on one of the virtual interactive displays results in corresponding multimedia change onto the graphical user interface of other virtual interactive displays.

Lee teaches wherein the 3D model comprises two or more virtual interactive displays, and the interaction input on one of the virtual interactive displays results in corresponding multimedia change onto the graphical user interface of other virtual interactive displays (Fig. 7F, ¶101, ¶193, wherein a 3D processor (not shown) may further be provided at the rear end of the formatter 360, for processing a signal to exert 3D effects, and wherein upon selection of the play icon 732, the controller 170 of the image display apparatus 100 may send a play request to the mobile terminal 600, receive an image played in the mobile terminal 600, and control play and display of a zoomed-in image of the played image).

It would have been obvious to one having ordinary skill in the art at the time the invention was filed to utilize the teaching of the image display apparatus of Lee with the teaching of three-dimensional object display of Ames as modified, because Lee teaches providing an image display apparatus for simply zooming in on a partial area during mirroring between a mobile terminal and the image display apparatus, which can be accomplished by the provision of an image display apparatus including a display, an interface to exchange data with a mobile terminal, and a controller to, when mirroring with the mobile terminal is performed, control to display a mirrored image corresponding to an image displayed on a display of the mobile terminal, and, when a zoom-in display input for a first area being a part of the mirrored image is received in a state that the mirrored image is displayed, control to display a zoomed-in image of the first area of the mirrored image on the display. (¶6, ¶8)

Response to Arguments

Applicant's arguments with respect to claim 1 have been considered but are moot in view of the new ground(s) of rejection, wherein Yi is relied upon to teach the newly amended limitation reciting "wherein the 3D model is rendered as a textured mesh object in the 3D graphics environment, and", and Fial is relied upon to teach the newly amended limitation reciting "input events directed to the virtual interactive display surface are processed by a distinct interface handler separate from a 3D model interaction handler", while Jovan previously taught the limitation reciting "wherein the virtual interactive display surface is in any orientation or perspective in synchronization with the 3D model geometry, and".

Conclusion

The examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application.

When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).

Inquiry

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANGIE BADAWI, whose telephone number is (571) 270-7590. The examiner can normally be reached Monday through Wednesday, 9:00 am - 5:00 pm EST, with Thursdays and Fridays off. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fred Ehichioya can be reached at (571) 272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANGIE BADAWI/Primary Examiner, Art Unit 2179

Prosecution Timeline

Aug 22, 2018
Application Filed
Dec 27, 2018
Response after Non-Final Action
Apr 15, 2020
Non-Final Rejection — §103, §112
Oct 19, 2020
Response Filed
Nov 04, 2020
Final Rejection — §103, §112
May 05, 2021
Request for Continued Examination
May 07, 2021
Response after Non-Final Action
Dec 20, 2021
Non-Final Rejection — §103, §112
Apr 27, 2022
Response Filed
May 24, 2022
Final Rejection — §103, §112
Nov 25, 2022
Request for Continued Examination
Nov 29, 2022
Response after Non-Final Action
May 31, 2023
Non-Final Rejection — §103, §112
Oct 02, 2023
Response Filed
Oct 30, 2023
Final Rejection — §103, §112
Feb 29, 2024
Request for Continued Examination
Mar 05, 2024
Response after Non-Final Action
Oct 23, 2024
Non-Final Rejection — §103, §112
Apr 24, 2025
Response Filed
May 07, 2025
Final Rejection — §103, §112
Nov 10, 2025
Request for Continued Examination
Nov 16, 2025
Response after Non-Final Action
Dec 31, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554394
SYSTEM AND METHOD FOR PROMOTING CONNECTIVITY BETWEEN A MOBILE COMMUNICATION DEVICE AND A VEHICLE TOUCH SCREEN
2y 5m to grant Granted Feb 17, 2026
Patent 12524146
USER INTERFACE INCLUDING MULTIPLE INTERACTION ZONES
2y 5m to grant Granted Jan 13, 2026
Patent 12517639
ONE-HANDED SCALED DOWN USER INTERFACE MODE
2y 5m to grant Granted Jan 06, 2026
Patent 12474813
SYSTEMS AND METHODS FOR AUGMENTED REALITY WITH PRECISE TRACKING
2y 5m to grant Granted Nov 18, 2025
Patent 12455750
MACHINE LEARNING FOR PREDICTING NEXT BEST ACTION
2y 5m to grant Granted Oct 28, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

9-10
Expected OA Rounds
59%
Grant Probability
97%
With Interview (+38.5%)
4y 1m
Median Time to Grant
High
PTA Risk
Based on 285 resolved cases by this examiner. Grant probability derived from career allow rate.
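The headline figures above can be reproduced with a minimal sketch. Assumptions (suggested by the footnote but not confirmed by the page): the grant probability is simply the examiner's career allow rate (168 granted / 285 resolved), and the "with interview" figure adds the interview lift in percentage points rather than multiplicatively.

```python
# Reproduce the dashboard's headline numbers from the counts shown above.
granted, resolved = 168, 285      # examiner's career totals
interview_lift_pts = 38.5         # interview lift, in percentage points (assumed additive)

allow_rate = granted / resolved                               # ~0.589
grant_probability = round(allow_rate * 100)                   # displayed as 59%
with_interview = round(allow_rate * 100 + interview_lift_pts)  # displayed as 97%

print(grant_probability, with_interview)
```

Note that an additive lift matches the displayed values (58.9 + 38.5 ≈ 97), whereas a multiplicative lift (58.9 × 1.385 ≈ 82) would not, which is why the additive reading is assumed here.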
