DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This action is in response to the Amendment filed on 11/18/2025.
Claims 7-13 and 18 are pending. Claims 7 and 8 have been amended. Claims 1-6 and 14-17 have been cancelled. Claim 18 is newly added.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. (FP 7.30.05)
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “3D display volume managing unit” in Claims 7, 10, and 11.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. (FP 7.30.06)
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 7-13, 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over King (US 20170169610 A1) in view of Poulos et al. (US 20140347390 A1, hereinafter Poulos), further in view of Gribetz et al. (US 20170236320 A1, hereinafter Gribetz).
Regarding Claim 7, King teaches a method for opening and placing an application in an augmented reality environment (King, Paragraph [0039], “As schematically shown in FIG. 2, the HMD device 100 may comprise a portal program 200… The portal program 200 may be loaded into memory and its instructions executed by processor 212 to perform one or more of the methods and processes for displaying a third party holographic portal described herein”), the method comprising: receiving a first user input from a user indicating a request for new content (King, Paragraph [0047], “the holographic portal may be displayed in response to a user command for an HMD device 100 to display the holographic portal of a selected person”), [[ the content being associated with a link from a web page ]]; launching an application to generate the content (King, Paragraph [0039], “The portal program 200 may be loaded into memory 208 and its instructions executed by processor”; [0042], “Upon receipt of a request 260, the portal program 200 may determine… to display a third party holographic portal”); creating a mini display volume of a 3D display volume managing unit, wherein a page preview is displayed in the mini display volume, wherein the 3D display volume managing unit is created simultaneously with the launching of the application (King, Paragraph [0049], “the visual representation of activity in the third party real world three dimensional environment may comprise a current preview 500 of at least a portion of the third party real world three dimensional environment”; [0049]-[0052], the portal program is started and, as part of the process, the preview/holographic window appears combined with the preview.
It is noted that the current preview is a smaller, bounded holographic surface shown to the user before the full experience loads or is scaled, i.e., a reduced-size, self-contained display space, which reads on the mini display volume); receiving a second user input indicating a movement of the mini display volume (King, Paragraph [0055], “Once the holographic window 330 is displayed, user Adam 320 may move the location of the window…may use voice commands, physical gestures, eye gaze inputs, or any other suitable user input to the HMD device 100”); receiving a third user input indicating [[ a placement of the mini display volume at a location in the augmented reality environment ]] (King, Paragraph [0055], “user Adam 320 may provide user input to HMD device 100 that designates a world-locked location in his living room 300 as the real world location where the holographic window 330 portal is to be displayed…user Adam 320 may verbally state ‘Put Robin's portal over the couch’”); and expanding the 3D display volume managing unit [[ in place of the mini display volume at the location ]], the 3D display volume managing unit displaying the content fully loaded within the 3D display volume managing unit (King, Paragraphs [0055]-[0057], [0068]-[0074], “In some examples user Adam 320 may provide scaling input…to increase or decrease a size of the holographic window”; “Adam 320 may move the location of the window within the living room”; “he may monitor the game and easily glance to the current preview 500 of activity in Robin's family room by a slight shift of his gaze”; expanding to the full scene, “a holographic visual representation 630” with geo-located objects/people in the world-locked position in the environment).
King does not explicitly disclose, but Poulos teaches, receiving a third user input indicating a placement of the mini display volume at a location in the augmented reality environment (Poulos, Paragraphs [0012], [0016]-[0022], “In a world-locking approach, a virtual object may be locked to a reference location in the real-world environment”; “user 104 may be used to position virtual objects. It will be appreciated that "positioning" as used herein may refer to two-dimensional and three-dimensional positioning of virtual objects”; “BLDV 108 may be updated and body-locked virtual objects repositioned if it is determined that the sensor data corresponds to body motion, rather than just a head motion”).
Poulos and King are analogous art since both are directed to displaying and manipulating virtual content in an augmented reality environment, including user-controlled positioning of such content in 3D space. King provided a way of launching an AR application that creates a movable holographic window (mini display volume) showing a current preview, allowing the user to move and place it in the AR scene and then expand it to show the fully loaded scene. Poulos provided a way of controlling the spatial placement of AR objects using explicit world-locking and body-locking modes, thereby fixing the object's position relative to a physical location in the real-world environment or to the user. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the spatial locking and placement model taught by Poulos into the invention of King such that when the mini display volume is placed in the AR environment, it is fixed in location using world-locking or body-locking to ensure the 3D display volume expands in place at a persistent location in the AR scene.
The combination of King and Poulos does not explicitly disclose, but Gribetz teaches, the content being associated with a link from a web page (Gribetz, Paragraph [0030], "An HTML document includes a plurality of HTML elements .... an HTML element is used to embed or tether a virtual element within the document"; [0031], "an image tag (e.g., an IMG tag) may be used to tether a virtual element within the document"; [0034], "a HTML Document Division Element (DIV), a HTML header element (H1), an HTML <a> href Attribute, a HTML table element, and the like to name but a few"; [0030], "In general, HTML is composed of a tree of HTML elements and other nodes ... Elements can also have content, including other elements and text"; [0068], "in FIG. 3, a virtual frame 316 may be rendered in the virtual 3-D space. In one example, the virtual frame is a high-fidelity virtual element. The virtual frame 316 may be used to display virtual digital content ... The virtual frame 316 may present digital content from a computer file, such as a document or a webpage including a thumbnail image 301 associated with a virtual element and a user button 302").
Gribetz and King are analogous art since both are directed to displaying and manipulating virtual content in an augmented reality environment, including user-controlled positioning of and interaction with content in 3D space. King provided a way of launching an AR application that creates a movable holographic window (mini display volume) showing a current preview, allowing the user to move and place it in the AR scene and then expand it to show the fully loaded scene. Gribetz provided a system for displaying HTML documents and web pages with tethered 3D virtual elements within an augmented reality environment, where virtual elements are associated with links and content within web pages. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the web-based content delivery and link-associated virtual elements taught by Gribetz into the modified invention of King such that the content displayed in the mini display volume, and subsequently in the expanded 3D display volume managing unit, would be associated with a link from a web page, thereby providing users with the ability to browse and interact with web-based content in an immersive AR environment and to access rich 3D content directly from web pages.
Regarding Claim 8, the combination of King, Poulos and Gribetz teaches the invention in Claim 7.
The combination further teaches wherein the first user input is a cursor movement over the link on the web page displayed in a browser application displayed inside of a 3D bounded display volume within the user's landscape (Gribetz, Paragraph [0069], FIGS. 3, 5, describing "a virtual frame 316 may be rendered in the virtual 3-D space"; "The virtual frame 316 may present digital content from a computer file, such as a document or a webpage"; [0071], "a virtual frame 510 anchored virtual map coordinates 511 in a virtual 3-D space corresponding to a table top 512. The virtual frame presents digital content, such as, for example, a document or a webpage. In one example, a browser of the client system accesses the digital content (e.g., web content) from a publisher and renders the content as mapped and rendered in the virtual frame according to the specified spatial coordinates of the frame in the virtual 3-D space."; [0035], "system parses or processes the accessed digital content according to the application accessing the digital content. For example, the system may implement a web browser application, which renders the HTML elements of the document for presentation to a user"; [0036], "as the system parses the elements of an HTML document for rendering on a display or for projection by a light source, the system determines whether any HTML element corresponds to a virtual element"; [0072], "the client system renders the low-fidelity virtual element within the bounding volume positioned adjacent to the frame"; [0073], "as shown in FIG. 5) within the virtual space at a position determined by the virtual coordinates of the rendered HTML element of the digital content presented in the frame"; [0021], "These systems create a virtual 3-D space based on, for example, input translated from real-world 3-D point data observed by one or more sensors of the display system to overlay the virtual 3-D space over the mapped real world environment of the viewer"; [0066], "a user environment 300 as viewed <reads on user's landscape> through a stereographic augmented or virtual reality system by the user includes both real world objects and virtual elements"; [0037], "a hover-over-the-button input, to display a message and a hyperlink").
Gribetz and King are analogous art since both are directed to user interaction and content display in augmented reality environments. King provided a way of launching an AR application triggered by user input. Gribetz provided a comprehensive system for rendering web browsers and HTML content with links inside 3D bounded display volumes (virtual frames and bounding volumes) within a user's AR landscape, where users can interact with web content, including links, to trigger display of associated 3D virtual elements. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the web browser interface with 3D bounded display volumes taught by Gribetz into the modified invention of King such that the first user input indicating a request for new content would be a cursor movement over a link on a web page displayed in a browser application displayed inside of a 3D bounded display volume within the user's AR landscape, thereby providing users with familiar web browsing interaction paradigms within an immersive AR environment and enabling seamless access to 3D content through standard web navigation.
Regarding Claim 9, the combination of King, Poulos and Gribetz teaches the invention in Claim 8.
The combination further teaches movement of the mini display volume (King, Paragraph [0055], “user Adam 320 may move the location of the window…use voice commands, physical gestures, eye gaze inputs, or any other suitable user input to the HMD device 100”).
King does not explicitly disclose, but Gribetz teaches, wherein the second user input is a selection of the link, and movement of the mini display volume (Gribetz, Paragraph [0076], “input to move the low-fidelity virtual element outside of the volume and a "release" action or input outside of the volume trigger the downloading and rendering of the high-fidelity virtual element”; [0056], “a grabbing of the low-fidelity virtual element, movement of the low fidelity virtual element in a direction”; [0022], “user input may comprise gesture-based input and/or other input… gestures including one or more of reaching, grabbing, releasing, touching, swiping, pointing, poking and/or other gestures may be identified”; it is noted that a single user input comprising a grab gesture with a movement gesture constitutes the second user input that both selects the link content and moves the mini display volume containing that content).
As explained in the rejection of Claim 8, the rationale for combining the user-selection teaching of Gribetz with King is provided above.
Regarding Claim 10, the combination of King, Poulos and Gribetz teaches the invention in Claim 7.
The combination further teaches wherein the 3D display volume managing unit replaces the mini display when the 3D display volume managing unit expands in place of the mini display (King, Paragraphs [0019], [0049]-[0052], [0055]-[0057], [0060], enable a wearer to perceive a 3D holographic image located within the physical environment that the wearer is viewing; the holographic window with current preview 500 is moved/placed, then can be scaled/expanded in place to the full content, and the content of the space can be changed or modified).
Regarding Claim 11, the combination of King, Poulos and Gribetz teaches the invention in Claim 7.
The combination further teaches wherein the content is loaded into the 3D display volume managing unit while the user is moving the mini display (King, Paragraphs [0049]-[0055], “the current preview 500 may display to user Adam 320 a live, holographic visual representation of activity in the room, including holographic representations of Robin 420 and the other person 450 and their movements”; it is noted that the preview window shows a live current preview while the user moves it).
Regarding Claim 12, the combination of King, Poulos and Gribetz teaches the invention in Claim 7.
The combination further teaches wherein the location is fixed to an object in the augmented reality environment (King, Paragraph [0055], “user Adam 320 may provide user input to HMD device 100 that designates a world-locked location in his living room 300 as the real world location where the holographic window 330 portal is to be displayed”).
Regarding Claim 13, the combination of King, Poulos and Gribetz teaches the invention in Claim 12.
The combination further teaches wherein the object is the user (Poulos, Paragraphs [0004]-[0009], [0012], [0014], “a method of positioning body-locked virtual objects in an augmented reality environment”; “a world-locking approach, a virtual object may be locked to a reference location in the real-world environment, such that the location of the virtual object remains constant relative to the real-world environment regardless of the position or angle from which it is viewed”; “with an object such as a virtual television, a user of a head-mounted display device may wish to keep the object persistently in view”; it is noted that when body-locked, the virtual object moves in correspondence with the movement of the user's head or body so that the virtual object appears in a fixed location from the perspective of the user).
Poulos and King are analogous art since both are directed to displaying and manipulating virtual content in an augmented reality environment, including user-controlled positioning of such content in 3D space. King provided a way of launching an AR application that creates a movable holographic window (mini display volume) showing a current preview, allowing the user to move and place it in the AR scene and then expand it to show the fully loaded scene. Poulos provided a way of controlling the spatial placement of AR objects using explicit world-locking and body-locking modes, including locking to the user as an object. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the user-object spatial locking and placement model taught by Poulos into the invention of King such that when the mini display volume is placed in the AR environment, the system can provide more flexible object handling, including the user as the object, thereby extending the functionality of the system when viewing objects in the AR scene.
Regarding Claim 18, King teaches a method for opening and placing an application in an augmented reality environment (King, Paragraph [0039], “As schematically shown in FIG. 2, the HMD device 100 may comprise a portal program 200… The portal program 200 may be loaded into memory and its instructions executed by processor 212 to perform one or more of the methods and processes for displaying a third party holographic portal described herein”), the method comprising: receiving a first user input from a user indicating a request for new content (King, Paragraph [0047], “the holographic portal may be displayed in response to a user command for an HMD device 100 to display the holographic portal of a selected person”); launching an application to generate the content (King, Paragraph [0039], “The portal program 200 may be loaded into memory 208 and its instructions executed by processor”; [0042], “Upon receipt of a request 260, the portal program 200 may determine… to display a third party holographic portal”); creating a mini display volume of a 3D display volume managing unit, wherein a page preview is displayed in the mini display volume, wherein the 3D display volume managing unit is created simultaneously with the launching of the application (King, Paragraph [0049], “the visual representation of activity in the third party real world three dimensional environment may comprise a current preview 500 of at least a portion of the third party real world three dimensional environment”; [0049]-[0052], the portal program is started and, as part of the process, the preview/holographic window appears combined with the preview.
It is noted that the current preview is a smaller, bounded holographic surface shown to the user before the full experience loads or is scaled, i.e., a reduced-size, self-contained display space, which reads on the mini display volume); receiving a second user input indicating a movement of the mini display volume (King, Paragraph [0055], “Once the holographic window 330 is displayed, user Adam 320 may move the location of the window…may use voice commands, physical gestures, eye gaze inputs, or any other suitable user input to the HMD device 100”); receiving a third user input [[ indicating a placement of the mini display volume at a location in the augmented reality environment ]] (King, Paragraph [0055], “user Adam 320 may provide user input to HMD device 100 that designates a world-locked location in his living room 300 as the real world location where the holographic window 330 portal is to be displayed…user Adam 320 may verbally state ‘Put Robin's portal over the couch’”); and expanding the 3D display volume managing unit [[ in place of the mini display volume at the location ]], the 3D display volume managing unit displaying the content fully loaded within the 3D display volume managing unit (King, Paragraphs [0055]-[0057], [0068]-[0074], “In some examples user Adam 320 may provide scaling input…to increase or decrease a size of the holographic window”; “Adam 320 may move the location of the window within the living room”; “he may monitor the game and easily glance to the current preview 500 of activity in Robin's family room by a slight shift of his gaze”; expanding to the full scene, “a holographic visual representation 630” with geo-located objects/people in the world-locked position in the environment).
King does not explicitly disclose, but Poulos teaches, receiving a third user input indicating a placement of the mini display volume at a location in the augmented reality environment (Poulos, Paragraphs [0012], [0016]-[0022], “In a world-locking approach, a virtual object may be locked to a reference location in the real-world environment”; “user 104 may be used to position virtual objects. It will be appreciated that "positioning" as used herein may refer to two-dimensional and three-dimensional positioning of virtual objects”; “BLDV 108 may be updated and body-locked virtual objects repositioned if it is determined that the sensor data corresponds to body motion, rather than just a head motion”).
Poulos and King are analogous art since both are directed to displaying and manipulating virtual content in an augmented reality environment, including user-controlled positioning of such content in 3D space. King provided a way of launching an AR application that creates a movable holographic window (mini display volume) showing a current preview, allowing the user to move and place it in the AR scene and then expand it to show the fully loaded scene. Poulos provided a way of controlling the spatial placement of AR objects using explicit world-locking and body-locking modes, thereby fixing the object's position relative to a physical location in the real-world environment or to the user. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the spatial locking and placement model taught by Poulos into the invention of King such that when the mini display volume is placed in the AR environment, it is fixed in location using world-locking or body-locking to ensure the 3D display volume expands in place at a persistent location in the AR scene.
The combination does not explicitly disclose, but Gribetz teaches, wherein the first user input is a cursor movement over a link on a web page (Gribetz, Paragraph [0030], "An HTML document includes a plurality of HTML elements .... an HTML element is used to embed or tether a virtual element within the document"; [0031], "an image tag (e.g., an IMG tag) may be used to tether a virtual element within the document"; [0034], "a HTML Document Division Element (DIV), a HTML header element (H1), an HTML <a> href Attribute, a HTML table element, and the like to name but a few"; [0030], "In general, HTML is composed of a tree of HTML elements and other nodes ... Elements can also have content, including other elements and text"; [0068], "in FIG. 3, a virtual frame 316 may be rendered in the virtual 3-D space. In one example, the virtual frame is a high-fidelity virtual element. The virtual frame 316 may be used to display virtual digital content ... The virtual frame 316 may present digital content from a computer file, such as a document or a webpage including a thumbnail image 301 associated with a virtual element and a user button 302"); displayed in a browser application displayed inside of a 3D bounded display volume within the user's landscape (Gribetz, Paragraph [0069], FIGS. 3, 5, describing "a virtual frame 316 may be rendered in the virtual 3-D space"; "The virtual frame 316 may present digital content from a computer file, such as a document or a webpage"; [0071], "a virtual frame 510 anchored virtual map coordinates 511 in a virtual 3-D space corresponding to a table top 512. The virtual frame presents digital content, such as, for example, a document or a webpage. In one example, a browser of the client system accesses the digital content (e.g., web content) from a publisher and renders the content as mapped and rendered in the virtual frame according to the specified spatial coordinates of the frame in the virtual 3-D space."; [0035], "system parses or processes the accessed digital content according to the application accessing the digital content. For example, the system may implement a web browser application, which renders the HTML elements of the document for presentation to a user"; [0036], "as the system parses the elements of an HTML document for rendering on a display or for projection by a light source, the system determines whether any HTML element corresponds to a virtual element"; [0072], "the client system renders the low-fidelity virtual element within the bounding volume positioned adjacent to the frame"; [0073], "as shown in FIG. 5) within the virtual space at a position determined by the virtual coordinates of the rendered HTML element of the digital content presented in the frame"; [0021], "These systems create a virtual 3-D space based on, for example, input translated from real-world 3-D point data observed by one or more sensors of the display system to overlay the virtual 3-D space over the mapped real world environment of the viewer"; [0066], "a user environment 300 as viewed <reads on user's landscape> through a stereographic augmented or virtual reality system by the user includes both real world objects and virtual elements"; [0037], "a hover-over-the-button input, to display a message and a hyperlink").
Gribetz and King are analogous art since both are directed to displaying and manipulating virtual content in an augmented reality environment, including user-controlled positioning of and interaction with content in 3D space. King provided a way of launching an AR application that creates a movable holographic window (mini display volume) showing a current preview, allowing the user to move and place it in the AR scene and then expand it to show the fully loaded scene. Gribetz provided a comprehensive system for rendering web browsers and HTML content with links inside 3D bounded display volumes within a user's AR landscape, where users can interact with web content, including links, to trigger display of associated 3D virtual elements tethered to HTML documents and web pages. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the web-based content delivery and link-associated virtual elements taught by Gribetz into the modified invention of King such that the content displayed in the mini display volume, and subsequently in the expanded 3D display volume managing unit, would be associated with a link from a web page, thereby providing users with the ability to browse and interact with web-based content in an immersive AR environment and to access rich 3D content directly from web pages.
Response to Arguments
Applicant's arguments with respect to claim 7, filed on 11/18/2025, regarding the rejection under 35 U.S.C. § 103 (i.e., that the prior art does not teach the claimed limitation(s)) have been considered but are moot in view of the new ground(s) of rejection. The limitation(s) are now taught by the combination of prior art references King, Poulos, and Gribetz.
In regard to claims 8-13, they depend directly or indirectly on independent claim 7. Applicant presents no arguments for these claims beyond those directed to independent claim 7. The limitations of these claims remain rejected in view of the combination of references as previously established and explained above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
US 20120293558 A1 Manipulating graphical objects.
US 20160328369 A1 Analyzing a click path in a spherical landscape viewport.
US 6057856 A 3D virtual reality multi-user interaction with superimposed positional information display for each user.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUJANG TSWEI whose telephone number is (571)272-6669. The examiner can normally be reached 8:30am-5:30pm EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571)272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/YuJang Tswei/Primary Examiner, Art Unit 2614