DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Priority
Applicant’s claim for the benefit of a prior-filed application under 35 U.S.C. 119(e) or under 35 U.S.C. 120, 121, 365(c), or 386(c) is acknowledged. Applicant has not complied with one or more conditions for receiving the benefit of an earlier filing date under 35 U.S.C. 120 as follows:
This application claims the benefit as a continuation-in-part of prior-filed Application No. 16/885,253 under 35 U.S.C. 120, 121, 365(c), or 386(c). Copendency between the current application and the prior application is required. Since the applications are not copending, the benefit claim to the prior-filed application is improper. Applicant is required to delete the claim to the benefit of the prior-filed application unless applicant can establish copendency between the applications.
A notice to file missing parts was mailed in Application No. 16/885,253 on June 8, 2020, setting a two-month period for reply. No reply was filed in response thereto, resulting in abandonment of that application on August 10, 2020. The present application was filed November 27, 2020. It is noted that the period for reply to the notice to file missing parts was extendable by payment of the extension fee under the provisions of 37 CFR 1.136(a), but a review of Application No. 16/885,253 does not indicate that any extension was requested. If applicant believes the present application is entitled to claim the benefit of Application No. 16/885,253 as a continuation-in-part, applicant is requested to clarify the basis for the priority claim.
Response to Amendment
Applicant’s amendment and remarks filed September 22, 2025, are responsive to the Office action mailed March 20, 2025. Claims 1-18 were previously pending, with claims 1-8 having been withdrawn from consideration in applicant’s amendment filed February 7, 2025, pursuant to the Restriction/Election requirement mailed July 31, 2023. Claims 9-18 have been cancelled and claims 19-28 are new. Claims 1-8 and 19-28 are therefore currently pending, with claims 1-8 withdrawn from consideration. Claims 19-28 are considered in this Office action.
Pertaining to claim objections in the previous office action
The numbering of the claims was not in accordance with 37 CFR 1.126, which requires the original numbering of the claims to be preserved throughout prosecution. The amendment cancelling claims 9-18 has rendered this objection moot as to those claims. Applicant should note that claims 1-8 would be objected to were they not withdrawn from consideration.
Pertaining to rejection under 35 USC § 112 in the previous office action
Claims 10 and 12-18 were rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. These claims have been cancelled.
Response to Arguments
Pertaining to rejection under 35 USC § 101 in the previous office action
Applicant's arguments filed September 22, 2025, have been fully considered but they are not persuasive. Claims 19-28 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Applicant argues claim 19 “recites operations explicitly tied to concrete device components of a user-side AR/VR device, including location sensors used to detect user entry into and exit from a geo-fence.” Location sensors are not recited in the rejected claims. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993). Applicant further argues the device components are not generic processors but are specialized AR/VR subsystems. There is no disclosure of these device components being anything other than generic processors, and in any case, if they represented an improvement, that improvement would need to be identifiable in the subject matter appearing in the claims. The programmed computer or "special purpose computer" test of In re Alappat, 33 F.3d 1526, 31 USPQ2d 1545 (Fed. Cir. 1994) (i.e., the rationale that an otherwise ineligible algorithm or software could be made patent-eligible by merely adding a generic computer to the claim for the "special purpose" of executing the algorithm or software) was superseded by the Supreme Court’s Bilski and Alice Corp. decisions. Eon Corp. IP Holdings LLC v. AT&T Mobility LLC, 785 F.3d 616, 623, 114 USPQ2d 1711, 1715 (Fed. Cir. 2015) ("[W]e note that Alappat has been superseded by Bilski, 561 U.S. at 605–06, and Alice Corp. v. CLS Bank Int’l, 573 U.S. 208, 110 USPQ2d 1976 (2014)"); Intellectual Ventures I LLC v. Capital One Bank (USA), N.A., 792 F.3d 1363, 1366, 115 USPQ2d 1636, 1639 (Fed. Cir. 2015) ("An abstract idea does not become nonabstract by limiting the invention to a particular field of use or technological environment, such as the Internet [or] a computer"). The additional elements are analyzed in detail, and the rationale is provided, in the detailed rejection below.
Pertaining to rejection under 35 USC § 102 in the previous office action
Applicant’s arguments, see remarks filed September 22, 2025, with respect to the rejection of claims under 35 U.S.C. 102 have been fully considered and are persuasive overall in view of the amendment, but see the following responses for further information. The rejection has therefore been withdrawn. However, upon further consideration, a new ground of rejection is made in view of 35 U.S.C. 103.
Applicant argues “distinct privacy modes in which a user can selectively operate in an off-grid mode or an on-grid mode” is not in the previously relied upon prior art. See the Tung reference and the detailed rejection below.
Applicant argues the previously relied upon prior art does not disclose “determining a region of gaze of the user using sensors of the AR/VR device.” This is incorrect, as clearly described in the portions of the reference cited below.
Applicant argues the previously relied upon prior art does not disclose “a machine-learning prediction module that predicts a future location of the user based on a current location, a current travel direction, and historical route data, and pre-generates relevant AR/VR elements for the predicted location.” See the Tung reference and the detailed rejection below.
Applicant argues the previously relied upon prior art does not disclose “dynamically adjusting the size of the geo-fence in real time based on temporal factors, including time of day, and based on user-specific attributes detected by a user tracking module.” This is disclosed in the combination of references as explained in the detailed rejection below.
Applicant argues the previously relied upon prior art does not require “that user interactions with the AR/VR billboard content may comprise voice queries, text queries, and gestures as detected by the user AR/VR device.” This is incorrect, as these inputs are disclosed in the prior art; however, applicant should also note that not all of them are required, because the claim recites that the limitation includes only “at least one of” those elements. The limitation is therefore anticipated by the presence of only one.
Applicant argues “claim 19 requires implementing post-session chatbot operations based on user interactions.” Chatbot operations are not recited in claim 19. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: the AR/VR “engine” in claims 19 and 26-27, the machine-learning “module” in claim 19, and the user tracking “module” in claim 22.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. Based on the specification, the structures are entirely embodied in software.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Objections
Claim 22 is objected to because of the following informalities: “the user tracking module” should be “a user tracking module.” Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(d):
(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:
Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.
Claims 26-27 are rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, fourth paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which they depend, or for failing to include all the limitations of the claim upon which they depend.
These claims recite
“26. (New) The computerized method of claim 19, further comprising detecting, by the AR/VR engine, that the user AR/VR device has left the geo-fence.
27. (New) The computerized method of claim 19, further comprising stopping, by the AR/VR engine, access to the AR/VR billboard content in the user AR/VR device responsive to detecting that the user AR/VR device has left the geo-fence.”
The claims depend from claim 19. The final limitation in claim 19 recites
“detecting, by the AR/VR engine using location data of the user-side AR/VR device, that the user has exited the geo-fence, and in response, stopping access to the AR/VR billboard content in the user-side AR/VR device.”
Claims 26 and 27 therefore fail to further limit the subject matter of claim 19, from which they depend. Applicant may cancel the claims, amend the claims to place them in proper dependent form, rewrite the claims in independent form, or present a sufficient showing that the dependent claims comply with the statutory requirements.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 19-28 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention (i.e., process, machine, manufacture, or composition of matter) (step 1). If the claim does fall within one of the statutory categories, it must then be determined whether the claim is directed to a judicial exception (i.e., law of nature, natural phenomenon, and abstract idea) (step 2A), and if so, it must additionally be determined whether the claim is a patent-eligible application of the exception (step 2B). Alice Corp. Pty. Ltd. v. CLS Bank Int’l, 573 U.S. 208, 110 USPQ2d 1976 (2014); MPEP 2106.
Step 1:
In the instant case, claims 19-28 are directed to a process. All claims are therefore within statutory categories. See MPEP 2106.03, Eligibility Step 1.
Step 2A, Prong 1:
These claims also recite, inter alia,
“determining, by an AR/VR engine, AR/VR billboard content; setting, by the AR/VR engine, a geo-fence associated with the AR/VR billboard; detecting, by the AR/VR engine using location data of a user…, that a user … has entered the geo-fence, wherein the AR/VR billboard comprises a set of digital AR/VR elements that are viewable by the user … while the user is within the geo-fence; enabling, by the AR/VR engine, the user to toggle a user mode …;
implementing, by the AR/VR engine, an off-grid mode for the user, wherein in the off-grid mode the user searches for a specified service and interacts with a service provider via the AR/VR billboard in a field of view of the user while remaining private; determining, by the AR/VR engine, that the user is in an on-grid mode; determining, by the AR/VR engine using gaze-tracking input of the user …, a region of gaze of the user; predicting, by a machine-learning module of the AR/VR engine, a future location of the user based on a current location of the user, a current travel direction of the user, and historical route data of the user, and pre-generating, by the AR/VR engine, AR/VR elements for display in the predicted future location of the user; dynamically updating, by the AR/VR engine, a size of the geo-fence based on contextual factors including at least one of a time of day or a user attribute, wherein when the user is detected as a returning customer the geo-fence is increased to a greater size than when the user is detected as commuting by a business associated with the AR/VR billboard; communicating, by the AR/VR engine …, the AR/VR billboard content to the user…; displaying, … the AR/VR billboard content; detecting, by the AR/VR engine, a user interaction with the AR/VR billboard content, the user interaction comprising at least one of a voice query …, a text query …, or a gesture …; implementing, by the AR/VR engine, a specified action based on the user interaction, wherein the specified action comprises at least one of: making a reservation for the user; forwarding a customer query of the user to a customer service representative; initiating an e-commerce transaction or a charging operation; ordering an item for delivery to the user; updating a customer profile associated with the user; sending a reservation reminder to the user; or following up with the customer service representative that a sale item has been prepared for the user to view or purchase; 
and detecting, by the AR/VR engine using location data of the user…, that the user has exited the geo-fence, and in response, stopping access to the AR/VR billboard content ….” Claim 19.
With the recited additional elements reserved for consideration under Step 2A, Prong Two, a careful analysis of the remaining limitations above, each on its own and all in combination, leads to the conclusion that each limitation recites an abstract idea and that, in combination, they simply recite a more detailed abstract idea. The recited abstract idea falls within the grouping of abstract ideas described as certain methods of organizing human activity, for example commercial interactions (including advertising, marketing or sales activities or behaviors). See MPEP 2106.04(a); Eligibility Step 2A1. The claims are therefore analyzed under the second prong of Eligibility Step 2A (Step 2A2; MPEP 2106.04(d)).
Step 2A, Prong 2:
In order to address Prong Two (MPEP 2106.04(d), Eligibility Step 2A2), we identify the additional elements beyond the abstract idea and determine whether those additional elements integrate the abstract idea into a practical application. The additional elements in the present claims are a user-side AR/VR device with a display subsystem and a microphone, and a network interface. These additional elements have been considered individually and as a whole, together with the functions they perform; e.g., the AR/VR device serves as a communication device standing in the place of the user and serving only input, output, and user communication functions. The network interface, although not recited with any substance, at least implies the presence of additional elements required to implement a network and communicate with or through it, but in any case that is its only function. Together these additional elements embody a broad, generic recitation of a technological field of use, but no more. The claims indicate that they are directed to a “computerized method,” but all substantive functions recited in the claims are identified only by way of their intended results rather than by how they are implemented and by what computer.
Based on this analysis, the claims are almost entirely a recitation of abstract ideas. The substantive process is recited without indicating any particular functional acts performed by any additional element to perform the steps or otherwise obtain the intended results. The additional elements do not improve the functioning of any computer or other technology or technical field, they do not apply the judicial exception with or by use of a particular machine, they do not transform or reduce a particular article to a different state or thing, and they fail to apply or use the judicial exception beyond generally linking the use of the judicial exception to a particular technological environment. See MPEP 2106.05.
If the disclosure describes any improvements to the functioning of a computer or to any other technology or technical field this improvement would need to be identifiable as the subject matter appearing in the claims. An indication that the claimed invention provides an improvement can include a discussion in the specification that identifies technical improvements realized by the claim over the prior art. The disclosure must provide sufficient details such that one of ordinary skill in the art would recognize the claimed invention as providing an improvement. MPEP 2106.05(a).
Claim limitations can integrate a judicial exception into a practical application by implementing the judicial exception with, or using it in conjunction with, a particular machine or manufacture that is integral to the claim. A general purpose computer that applies a judicial exception by use of generic computer functions does not qualify as a particular machine. Ultramercial, Inc. v. Hulu, LLC, 772 F.3d 709 (Fed. Cir. 2014); MPEP 2106.05(b), (f). There are no particular machines or manufactures identified in the present claims. Claimed elements that are not abstract are identified broadly and generally as applying the method, and the method itself is described only by way of the intended functional results of unidentified activities, without reference to any particular functional acts or specific functions performed by any particularly identified machine, and without reference to use in conjunction with any particular article of manufacture.
The claims do not effect the transformation or reduction of a particular article to a different state or thing. Changing to a different state or thing means more than simply using an article or changing the location of an article. A new or different function or use can be evidence that an article has been transformed. Purely mental processes in which data, thoughts, impressions, or human based actions are "changed" are not considered a transformation. MPEP 2106.05(c).
The claims do not apply or use the judicial exception in any other meaningful way beyond generally linking the use of the judicial exception to a particular technological environment. As a result the claim as a whole appears to be a drafting effort designed to monopolize the exception. MPEP 2106.05(e),(h).
The additional elements have not been found to integrate the abstract idea into a practical application.
Step 2B:
Although the additional elements have not been found to integrate the abstract idea into a practical application the claims could still be eligible if they recite additional elements that amount to an inventive concept (“significantly more” than the judicial exception). MPEP 2106.05, Eligibility Step 2B.
The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the sparse additional elements are merely props supporting instructions to implement the abstract idea in a computer field of use. MPEP 2106.05(f). Simply adding a general purpose computer or computer components after the fact to an abstract idea does not provide significantly more. MPEP 2106.05(f)(2); see also OIP Techs., Inc. v. Amazon.com, Inc., 788 F.3d 1359, 115 USPQ2d 1090 (Fed. Cir. 2015) (“relying on a computer to perform routine tasks more quickly or more accurately is insufficient to render a claim patent eligible.”). The elements are recited at a high level of generality, merely implement abstract ideas potentially using generic computers, and they fail to present a technical solution to a technical problem created by the use of the technology. Limitations that amount to merely indicating a field of use or technological environment in which to apply a judicial exception do not amount to significantly more than the exception itself. See Ret. Capital Access Mgmt. Co. v. U.S. Bancorp, 611 F. App’x 1007 (Fed. Cir. 2015) (“It may be very clever; it may be very useful in a commercial context, but they are still abstract ideas,” said Circuit Judge Alan Lourie.). MPEP 2106.05(h).
Finally, dependent claims 20-28 do not add "significantly more" to establish eligibility because they merely recite additional abstract ideas that further describe the data being input and output during implementation of the abstract idea. A more detailed abstract idea is still abstract. PricePlay.com, Inc. v. AOL Adver., Inc., 627 F. App’x 925 (Fed. Cir. 2016) (in addressing a bundle of abstract ideas stacked together during oral argument, U.S. Circuit Judge Kimberly Moore said, "All of these ideas are abstract…. It’s like you want a patent because you combined two abstract ideas and say two is better than one.").
All of the above leads to the conclusion that the additional claim elements do not provide meaningful limitations to transform the claimed subject matter into significantly more than an abstract idea. MPEP 2106.05; Eligibility Step 2B. As a result, the claims are rejected under 35 U.S.C. 101 as being directed to non-statutory subject matter because they recite an abstract idea without being directed to a practical application, and they do not amount to significantly more than the abstract idea. MPEP 2106.05, supra.
The preceding analysis applies to all statutory categories of invention. Accordingly, claims 19-28 are rejected as ineligible for patenting under 35 U.S.C. 101 based upon the same analysis.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 19-28 are rejected under 35 U.S.C. 103 as being unpatentable over Rathod (Pub. No. WO 2020/148659 A2) in view of Tung (Pub. No. US 2015/0350351 A1).
Rathod teaches a computerized method for implementing an augmented reality (AR)/virtual reality (VR) billboard, and further discloses, regarding
Claim 19. A computerized method for implementing an augmented reality (AR)/virtual reality (VR) billboard comprising: ● determining, by an AR/VR engine, AR/VR billboard content (see at least Rathod abstract “preparing contents,” p.20:20-30 “enabling the sponsor or advertiser or publisher or user to define augmented reality,” p.41:25-35 “capture media via suggested media capture controls, selected object from augmented reality system or application, automatically captured photos or recorded videos or live streaming”); ● setting, by the AR/VR engine, a geo-fence of the AR/VR billboard (see at least Rathod abstract “pre-defined geofence,” p.5:20-22, 29-35 “pre-defined location or place or geofence marked as augmented reality scannable locations.... pre-defined or drawn geofence boundary,” p.20:20-29 “draw geofence boundaries on map”); ● detecting, by the AR/VR engine using location data of a user-side AR/VR device, that a user with the user-side AR/VR device has entered the geo-fence, wherein the AR/VR billboard comprises a set of digital AR/VR elements that are viewable by the user-side AR/VR device while the user is within the geo-fence (see at least Rathod p.5:20-22 “provide or notify augmented reality scannable locations based on monitored and tracked user device location and pre-defined location or place or geofence marked as augmented reality scannable locations,” p.21:5-10 “received current location from user device, wherein location comprises current location associated or identified place and geo-fence information,” p.24:1-6 “identifying current location of identified and determined vehicles associated place or marked location, wherein place or marked location or geofence comprises”);
● enabling, by the AR/VR engine, the user to toggle a user mode in the user-side AR/VR device (see at least Rathod p.3:12-25 “interactions with user device including rotate device, shake device, particular type of touch, swipe, tap on object including one or more types of one or more touches, taps, swipes on object, pinch in and pinch on object while conducting augmented reality scan, tap anywhere on device, providing voice command, identifying current location and associated place name, type and data, current date and time, timer, front camera or back camera user's face or body expressions and movement, eye expression and movement based on eye tracking system, recognizing type, name, location of object or recognized object associated one or more types, name, value of data, metadata, and keywords, holding & duration whether or not user is holding the device, and for how long (direct reading of touch sensor), … movement of device including changes in device orientation … identification of number or ranges of steps of walking, proximity estimated distance in cm to proximal object”, p22:5-15 “reaction comprises like, dislike, ratings, comments, reviews good, wow, bad, excellent, very good, average, scales, expressions, emotions, emotions, moods, style, refer, selected type of reaction or keyword or question. … call-to-actions comprises buy, follow, connect, install application, visit website, share contact details, invite, refer, order, participate in deals or events, register, subscribe, play, view, listen, taste, and claim offer,” p.50:15-35 “user can provide one or more types of face expressions 330 ( display to user or work in background silent mode),” p.79:30-35 “displaying of next or third or subsequent reaction "Video Review" 1740 and automatically display camera application in video recording mode”).
Rathod teaches all of the above, and all of the below, as noted, and discloses a) setting a geo-fence of AR/VR content, b) detecting that a user with the user-side AR/VR device has entered the geo-fence, c) enabling the user to toggle a user mode, and d) implementing privacy settings for the user, but does not explicitly disclose implementing, by the AR/VR engine, an off-grid mode for the user, wherein in the off-grid mode the user searches for a specified service and interacts with a service provider via the AR/VR billboard in a field of view of the user while remaining private.
Tung also teaches a) setting a geo-fence of AR/VR content, b) detecting that a user with the user-side AR/VR device has entered the geo-fence, c) enabling the user to toggle a user mode, and d) implementing privacy settings for the user, and Tung further discloses
● implementing, by the AR/VR engine, an off-grid mode for the user, wherein in the off-grid mode the user searches for a specified service and interacts with a service provider via the AR/VR billboard in a field of view of the user while remaining private (see at least Tung ¶0031 “privacy setting of a user determines how particular information associated with a user can be shared. The authorization server may allow users to opt in to or opt out of having their actions logged … or shared with other systems ( e.g., third-party system 170), such as, for example, by setting appropriate privacy settings,” ¶0052 “Social-networking system 160 may determine that a user's location should be used as the content location based on privacy settings or user preferences”);
Therefore, it would have been obvious to one of ordinary skill in the art at the time of invention (for pre-AIA applications) or filing (for applications filed under the AIA) to modify the method of Rathod to include implementing, by the AR/VR engine, an off-grid mode for the user, wherein in the off-grid mode the user searches for a specified service and interacts with a service provider via the AR/VR billboard in a field of view of the user while remaining private, as taught by Tung, since the claimed invention is merely a combination of old elements and, in the combination, each element merely would have performed the same function as it did separately. One of ordinary skill in the art would have recognized that the results of the combination were predictable and would result in an improvement. This is because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such features, even from a variety of technical fields, into methods and systems implemented using similar technological structures (i.e., generic computer and/or network hardware such as processors, servers, etc.). In this case the areas of technical endeavor are nonetheless similar and overlapping.
Applicant has not disclosed that the added feature solves any stated problem or is for any particular purpose beyond the performance of the functions the elements performed separately, and since each element and its function are shown in the prior art, the difference between the claimed subject matter and the prior art rests not on any individual element or function but in the very combination itself. It would therefore have been an obvious matter of design choice to include the feature from Tung in the method of Rathod. Furthermore, the combination solved no long-felt need. Incorporating cumulative known features is additionally obvious to one of ordinary skill in the art because doing so increases commercial use of a method by attracting users that previously might have chosen one of the previously known methods.
Rathod in view of Tung teaches
● determining, by the AR/VR engine, that the user is in an on-grid mode (see at least Rathod abstract “camera grid computing for utilizing cameras of smartphone devices of users of networks including in-vehicle cameras,” p.1:15-20 “creating camera grid computing for utilizing cameras of smartphone devices of users of networks including in-vehicle cameras,” p.20:20-30 “types of activities, actions, and reactions including like or liked, visited, checked-in, purchased, ordered, booked, displayed, viewed, viewing, reading or read and eating one or more types of products, food items and services, interactions, status, transactions related to/with one or more products, persons and entities and comprises one or more types of details including product name, brand, price, person name, designation, role and place of business name, logo, brand,” p.123 “utilizing of movable mobile camera grid computing for variety of purposes. At present driver of vehicle utilizes map application at the time of driving of vehicle”);
● determining, by the AR/VR engine using gaze-tracking input of the user-side AR/VR device, a region of gaze of the user (see at least Rathod p.3:15-20 “eye expression and movement based on eye tracking system,” p.9:4-27 “providing per-defined one or more types of senses or senor data including movements of user device at particular direction and particular angels ... and eye or body parts or expression commands,” p.57:26-p.58:10, where the whole paragraph describes tracking eye movements such as direction of glances, “eye-based gesture control,” etc.);
● predicting, by a machine-learning module of the AR/VR engine, a future location of the user based on a current location of the user, a current travel direction of the user, and historical route data of the user, and pre-generating, by the AR/VR engine, AR/VR elements for display in the predicted future location of the user (see at least Tung ¶0085 “predicted probability that a user will perform a particular action based on the user's interest in the action. In this way, a user's future actions may be predicted based on the user's prior actions, where the coefficient may be calculated at least in part a the history of the user's actions. … these actions may include various types of communications, such as sending messages, posting content, or commenting on content; various types of a observation actions, such as accessing or viewing profile pages, media, or other suitable content,” ¶0086 “machine-learning algorithms trained on historical actions and past user responses, or data farmed from users by exposing them to various options and measuring responses,” ¶0090 “Coefficients may be used to predict whether a user will perform a particular action based on the user's interest in the action. A coefficient may be used when generating or presenting any type of objects to a user, such as advertisements, search results, news stories, media, messages, notifications, or other suitable objects”);
● dynamically updating, by the AR/VR engine, a size of the geo-fence based on contextual factors including at least one of a time of day or a user attribute, wherein when the user is detected as a returning customer the geo-fence is increased to a greater size than when the user is detected as commuting by a business associated with the AR/VR billboard (see at least Rathod p.2 “one or more types of users including past and current visitors, customers, prospective customers, viewers, listeners, eaters, patients, members, clients, employees, associates, partners, connected users, commuters, travelers, service subscribers, guests who interacted with said augmented reality scanning,” p.5:20-22, 29-35, p.8:10-20, p.9:5-27, p.21 “information of location, place of business, pre-defined one or more geofence or range surround particular location or place, schedules or date and time, duration, rules, target user profiles or models, user characteristics and attributes including age, gender, language, interest, skill, education, positions, income range, home or work address or location information,” in view of Tung ¶0056 “may use machine learning heuristics to calculate a geo-fence radius for a particular set of circumstances,” ¶0060 “system 160 may use machine learning methods to adjust suggested geo-fenced areas to the user, with the additional factor of whether the user is currently travelling”);
● communicating, by the AR/VR engine via a network interface, the AR/VR billboard content to the user-side AR/VR device (see at least Rathod p.2:25-35 “send message or make call or video call or follow or subscribe posted contents or send invitation to connect to one or more types of users including past and current visitors, customers, prospective customers,” p.3:5-15 “sending one or more images, photos, videos, voice and automatically identified one or more types of data and metadata, conducting one or more types of actions, interactions with user device,” p.10 “advertise or broadcast automatically generated post or message based on scanned object data to followers and one or more types of destinations”);
● displaying, by the user-side AR/VR device on a display subsystem, the AR/VR billboard content (see at least Rathod p.2:20-25 “displaying recognized objects and associated information or one or more types of entities including brand, shop, company, service provider, staff, seller specific contextual or user relevant social interactions,” p.8:20-p.9:3 “advertise or broadcast automatically generated post or message based on scanned object data to followers and one or more types of destinations, ... participate in deal, get offer or claim offer, share contact details, connect, follow or subscribe recognized and identified scanned object or product or person or associated entity or place of business or brand or company ... scanned object or scanned object associated with particular location or location associated place or place of business”);
● detecting, by the AR/VR engine, a user interaction with the AR/VR billboard content, the user interaction comprising at least one of a voice query detected by a microphone of the user-side AR/VR device, a text query entered through the user-side AR/VR device, or a gesture detected by the user-side AR/VR device (see at least Rathod abstract “applying one or more types of gestures, multi-touch and movement of user device,” p.2 “voice command,” p.8:1-5 “text messaging,” p.60:1-12. Please note: The phrase "at least one of" precedes the recitation of alternative or optional limitations only one of which is required. Language claiming elements in the alternative is anticipated by the presence of any single alternative. Beyond that it does not result in any further limitation because it merely represents contingencies that are not required.
Applicant is reminded that optional or conditional elements do not narrow the claims because they can always be omitted. See, e.g., MPEP § 2111.04 ("Claim scope is not limited by claim language that suggests or makes optional but does not require steps to be performed, or by claim language that does not limit a claim to a particular structure."); and In re Johnston, 435 F.3d 1381, 77 USPQ2d 1788, 1790 (Fed. Cir. 2006) ("As a matter of linguistic precision, optional elements do not narrow the claim because they can always be omitted.")); and
● implementing, by the AR/VR engine, a specified action based on the user interaction, wherein the specified action comprises at least one of: making a reservation for the user, forwarding a customer query of the user to a customer service representative, initiating an e-commerce transaction or a charging operation, ordering an item for delivery to the user, updating a customer profile associated with the user, sending a reservation reminder to the user, or following up with the customer service representative that a sale item has been prepared for the user to view or purchase (see at least Rathod p.2:1-15, p.9:29-p.10:10 “actions or call-to-actions or instructions or commands comprises make order, buy, book, purchase, make payment,… participate in or register with event, participate in deal, get offer or claim offer” p.21:10-15 “privacy settings and preferences of user, target criteria … target user profiles,” p.29:15-25 “selected or identified one or more contacts including profile photo, name, unique identity and other information, initiating transaction, sending of or successfully send or unsuccessful in sending of particular amount of money to particular person or account of particular or place of business,” p.41:10-20 “automatically captured photos or record video from camera, applied gestures, multi-touches, provide voice or eye or body commands, captured photos or recorded videos, … wherein user data comprises user profile, … activities, actions, events, senses, transactions, status, updates,” p.54:1-10, p.55:1-3 “get appointment, book order,” p.67:15-22. Please note: see the previous comment concerning the recitation of alternative or optional limitations preceded by "at least one of".);
● detecting, by the AR/VR engine using location data of the user-side AR/VR device, that the user has exited the geo-fence, and in response, stopping access to the AR/VR billboard content in the user-side AR/VR device (see at least Tung abstract “If the second user moves to a location outside the geo-fenced area, determination is made of whether the sharing of the content item should be terminated,” ¶0006 “When an associated user's updated location is determined to be outside the geo-fenced area, the social-networking system may determine if the shared content item should subsequently be expired. Upon determining that the shared content item should be expired, the social-networking system may expire the sharing of the content”).
Claim 20. The computerized method of claim 19, wherein the AR/VR billboard content comprises at least one of a digital advertisement, a digital entertainment, a digital social media content, or a chatbot-generated content (see at least Rathod p.7:25-34, p.39:30-35, p.92:4-10. Please note: see previous comments regarding claim language including optional or alternative limitations.).
Claim 21. The computerized method of claim 19, wherein a business locates the AR/VR billboard at a front of a physical retail store (see at least Rathod p.59:20-35 “displaying called user photo, name and other details and n-store location information or location identity to enable salesman to identify location, reach there and identify customer who called salesman,” p.82:14-35, p.99, p.114:1-11. Please note: the description includes in-store devices and interactions; this encompasses a front-of-store location.).
Claim 22. The computerized method of claim 19, wherein dynamically adjusting the size of the geo-fence comprises varying the size in real time based on temporal factors including a time of day and based on user-specific attributes detected by the user tracking module (see at least Rathod p.21 “information of location, place of business, pre-defined one or more geofence or range surround particular location or place, schedules or date and time, duration, rules, target user profiles or models, user characteristics and attributes including age, gender, language, interest, skill, education, positions, income range, home or work address or location information,” in view of Tung ¶0056 “may use machine learning heuristics to calculate a geo-fence radius for a particular set of circumstances…. system 160 may use a default setting to automatically use the suggested geo-fence radius without explicit user approval,” ¶0057 “system 160 may consider previous shared content items and their associated geo-fence radii, based on the intended recipients, time of sharing or time that the content was created, …. system 160 may consider if the sharing user is traveling, and use a different geo-fence radius for a traveling user as opposed to a user who is at or near his home,” ¶0060 “system 160 may use machine learning methods to adjust suggested geo-fenced areas to the user, with the additional factor of whether the user is currently travelling”).
Claim 23. The computerized method of claim 19, wherein detecting that the user is a returning customer is performed by referencing stored historical interaction data, and wherein the geo-fence is enlarged relative to when the user is detected to be commuting by the business based on travel direction and lack of prior interaction (see at least Rathod p.2 “one or more types of users including past and current visitors, customers, prospective customers, viewers, listeners, eaters, patients, members, clients, employees, associates, partners, connected users, commuters, travelers, service subscribers, guests who interacted with said augmented reality scanning,” p.21 “information of location, place of business, pre-defined one or more geofence or range surround particular location or place, schedules or date and time, duration, rules, target user profiles or models, user characteristics and attributes including age, gender, language, interest, skill, education, positions, income range, home or work address or location information,” in view of Tung ¶0056 “may use machine learning heuristics to calculate a geo-fence radius for a particular set of circumstances,” ¶0057 “system 160 may consider if the sharing user is traveling, and use a different geo-fence radius for a traveling user as opposed to a user who is at or near his home,” ¶0060, ¶0085 “coefficient may be calculated at least in part a the history of the user's actions”).
Claim 24. The computerized method of claim 19, wherein detecting the user interaction with the AR/VR billboard content comprises at least one of a voice query, a text query, or a gesture detected by the user AR/VR device (see at least Rathod abstract “applying one or more types of gestures, multi-touch and movement of user device,” p.2 “voice command,” p.8:1-5 “text messaging”. Please note: see previous comments concerning alternative and/or optional limitations.).
Claim 25. The computerized method of claim 19, wherein implementing the specified action comprises at least one of providing the user an incentive to purchase an item, making a reservation, initiating a chatbot session, scheduling an appointment, ordering an item for delivery, updating a user profile, or sending a reservation reminder (see at least Rathod p.2:1-15, p.21:10-15, p.54:1-10, p.55:1-3, p.67:15-22. Please note: see previous comments concerning alternative and/or optional limitations.).
Claim 26. The computerized method of claim 19 further comprising: ● detecting, by the AR/VR engine, that the user AR/VR device has left the geo-fence (see at least Tung abstract “If the second user moves to a location outside the geo-fenced area, determination is made of whether the sharing of the content item should be terminated,” ¶0006 “When an associated user's updated location is determined to be outside the geo-fenced area, the social-networking system may determine if the shared content item should subsequently be expired. Upon determining that the shared content item should be expired, the social-networking system may expire the sharing of the content”).
Claim 27. The computerized method of claim 19 further comprising: ● stopping, by the AR/VR engine, access to the AR/VR billboard content in the user AR/VR device responsive to detecting that the user AR/VR device has left the geo-fence (see at least Tung abstract “If the second user moves to a location outside the geo-fenced area, determination is made of whether the sharing of the content item should be terminated,” ¶0006 “When an associated user's updated location is determined to be outside the geo-fenced area, the social-networking system may determine if the shared content item should subsequently be expired. Upon determining that the shared content item should be expired, the social-networking system may expire the sharing of the content”).
Claim 28. The computerized method of claim 19, wherein implementing the on-grid mode comprises enabling the user to search for broadcasts initiated by other entities in the AR/VR system and making the user visible in broadcast searches of the other entities (see at least Rathod abstract “camera grid computing for utilizing cameras of smartphone devices of users of networks including in-vehicle cameras,” p.1:15-20 “creating camera grid computing for utilizing cameras of smartphone devices of users of networks including in-vehicle cameras,” p.8:20-35, p.20:20-30 “types of activities, actions, and reactions including like or liked, visited, checked-in, purchased, ordered, booked, displayed, viewed, viewing, reading or read and eating one or more types of products, food items and services, interactions, status, transactions related to/with one or more products, persons and entities and comprises one or more types of details including product name, brand, price, person name, designation, role and place of business name, logo, brand,” p.123 “utilizing of movable mobile camera grid computing for variety of purposes. At present driver of vehicle utilizes map application at the time of driving of vehicle”).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
● deCharms, Patent No.: US 9,014,661 B2: teaches geo-fencing and augmented reality display in a personal security context, including consideration of user privacy. "User privacy is maintained with access to a user's personal data and device access being restricted to only those users or user roles (e.g., fireman, police officer) that the user has explicitly designated." c56:1-14.
● Berquam et al., Paper No. 20250314; Pub. No.: US 2020/0249819 A1: teaches display of AR information to a user device based on geofence and includes privacy modes. See ¶¶0102, 0142.
● Abdelkader, Patent No.: US 11,120,422 B2: teaches notifications provided to user upon entry to geo-fenced area based on business interests and AI/ML derived user preference information (e.g., via one or more machine learning models, such as k-means clustering algorithms), used to notify users likely to be interested in a deal. User information, such as their name, may optionally be hidden from the entity. User preference information may be presented, such that the entity may solely notify third-party users who are likely to be interested in a deal. c7:25-50.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ADAM LEVINE whose telephone number is (571)272-8122. The examiner can normally be reached Monday - Thursday 9am-7:30pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Marissa Thein, can be reached at 571-272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ADAM L LEVINE/Primary Examiner, Art Unit 3689 December 22, 2025