Prosecution Insights
Last updated: April 19, 2026
Application No. 18/558,936

PROCESSING DEVICE, PROCESSING METHOD, AND PROCESSING PROGRAM

Non-Final OA: §101, §102, §103
Filed: Nov 03, 2023
Examiner: THEIN, MARIA TERESA T
Art Unit: 3689
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: NTT Qonoq Inc.
OA Round: 1 (Non-Final)
Grant Probability: 28% (At Risk)
OA Rounds: 1-2
To Grant: 5y 4m
With Interview: 60%

Examiner Intelligence

Career Allow Rate: 28% (62 granted / 219 resolved; -23.7% vs TC avg)
Interview Lift: +31.2% (resolved cases with interview)
Avg Prosecution: 5y 4m (typical timeline); 8 applications currently pending
Total Applications: 227 across all art units (career history)
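The headline figures on the card above are simple ratios. As a minimal sketch, they could be recomputed from the raw counts as follows; note the 52.0% Tech Center average is an assumed value back-derived from the displayed -23.7% delta, not taken from source data:

```python
# Hypothetical recomputation of the examiner-stats card.
# Raw counts come from the card itself (62 granted / 219 resolved);
# the TC average (52.0%) is an assumption inferred from the -23.7% delta.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def delta_vs_tc(rate: float, tc_avg: float) -> float:
    """Signed gap between this examiner's rate and the Tech Center average."""
    return rate - tc_avg

rate = allow_rate(62, 219)
print(round(rate, 1))                      # 28.3, displayed as "28%"
print(round(delta_vs_tc(rate, 52.0), 1))   # -23.7, matching the card
```

The same arithmetic applies to the statute-specific rates below, each compared against its own (estimated) Tech Center baseline.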

Statute-Specific Performance

§101: 25.5% (-14.5% vs TC avg)
§103: 45.7% (+5.7% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 16.8% (-23.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 219 resolved cases

Office Action

§101 §102 §103
DETAILED ACTION

Status of Claims
The following is an office action in response to the communication filed 11/03/2023. Claims 1-8 are currently pending and have been examined.

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement
The information disclosure statements (IDS) submitted on 11/3/2023 and 2/7/2024 were received and considered. The submissions are in compliance with the provisions of 37 CFR 1.97.

Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:

(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.

Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C.
112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “section…” (claims 1-6) with the functional language “that constructs”, “that acquires” and “that causes”, which are not preceded by a structural modifier.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. In the specification (para. 29), the sections are interpreted as software.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 101

35 U.S.C.
101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-8 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception without significantly more. The claims recite an abstract idea. This judicial exception is not integrated into a practical application. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

First, it is determined whether the claims are directed to a statutory category of invention. See MPEP 2106.03(II). In the instant case, claims 1-6 are directed to a machine, claim 7 is directed to a process, and claim 8 is directed to a manufacture. Therefore, claims 1-8 are directed to statutory subject matter under Step 1 of the Alice/Mayo test (Step 1: YES).

The claims are then analyzed to determine if the claims are directed to a judicial exception. See MPEP 2106.04. In determining whether the claims are directed to a judicial exception, the claims are analyzed to evaluate whether the claims recite a judicial exception (Prong 1 of Step 2A), as well as analyzed to evaluate whether the claims recite additional elements that integrate the judicial exception into a practical application of the judicial exception (Prong 2 of Step 2A). See MPEP 2106.04.
Taking claim 1 as representative, claim 1 recites at least the following limitations that are believed to recite an abstract idea: constructs a shopping mall having stores where a user can browse and purchase products in a space and provides the shopping mall to the user; acquires image information which is obtained by monitoring a physical store that actually sells products displayed in a store, and processes the image information to acquire customer behavior information of the physical store; and causes each user to reflect the customer behavior information of the physical store into the store or the shopping mall.

The above limitations recite the concept of monitoring a user’s behavior to browse and purchase products and creating a representation of the customer in the physical store based on customers’ behavior information. These limitations, under their broadest reasonable interpretation, fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, enumerated in the MPEP, in that they recite commercial or legal interactions such as advertising, marketing, or sales activities or behaviors. Specifically, the invention relates to marketing and sales activities. This is illustrated in [0008] of the Specification, which describes the invention in terms of a user’s willingness to purchase products.

Furthermore, the limitations, under their broadest reasonable interpretation, fall within the “Mental Processes” grouping of abstract ideas, enumerated in the MPEP, in that they recite concepts performed in the human mind, such as observations, evaluations, judgements, and opinions. Specifically, the limitations recite concepts similar to collecting, acquiring and reflecting information, which can be performed in the human mind or with pen and paper. Independent claims 7 and 8 recite similar concepts as those recited in claim 1 and accordingly fall within the same groupings of abstract ideas.
Accordingly, under Prong One of Step 2A of the MPEP, claims 1, 7, and 8 recite an abstract idea (Step 2A, Prong One: YES).

Under Prong Two of Step 2A of the MPEP, claims 1, 7 and 8 recite additional elements, such as a processing device, a construction section, a virtual shopping mall, a virtual store, an imaging section, a virtual space, a reflection section, and a user terminal. These additional elements are described at a high level in Applicant’s specification without any meaningful detail about their structure or configuration. As such, these computer-related limitations are not found to be sufficient to integrate the abstract idea into a practical application. Although these additional computer-related elements are recited, claims 1, 7 and 8 merely invoke such additional elements as a tool to perform the abstract idea. Implementing an abstract idea on a generic computer is not indicative of integration into a practical application. Similar to the limitations of Alice, claims 1, 7, and 8 merely recite a commonplace business method (i.e., monitoring a user’s behavior to browse and purchase products and creating a representation of the customer in the physical store based on customers’ behavior information) being applied on a general purpose computer. See MPEP 2106.05(f).

Furthermore, claims 1, 7 and 8 generally link the use of the abstract idea to a particular technological environment or field of use. The courts have identified various examples of limitations as merely indicating a field of use/technological environment in which to apply the abstract idea, such as specifying that the abstract idea of monitoring audit log data relates to transactions or activities that are executed in a computer environment, because this requirement merely limits the claims to the computer field, i.e., to execution on a generic computer (see FairWarning v. Iatric Sys.).
Likewise, claims 1, 7, and 8 specifying that the abstract idea of monitoring a user’s behavior to browse and purchase products and creating a representation of the customer in the physical store based on customers’ behavior information is executed in a computer environment merely indicates a field of use in which to apply the abstract idea because this requirement merely limits the claims to the computer field, i.e., to execution on a generic computer. As such, under Prong Two of Step 2A of the MPEP, when considered both individually and as a whole, the limitations of claims 1, 7 and 8 are not indicative of integration into a practical application (Step 2A, Prong Two: NO). Since claims 1, 7 and 8 recite an abstract idea and fail to integrate the abstract idea into a practical application, claims 1, 7 and 8 are “directed to” an abstract idea (Step 2A: YES).

Next, under Step 2B, the claims are analyzed to determine if there are additional claim limitations that individually, or as an ordered combination, ensure that the claim amounts to significantly more than the abstract idea. See MPEP 2106.05. The instant claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception for at least the following reasons.

Returning to independent claims 1, 7, and 8, these claims recite additional elements, such as a processing device, a construction section, a virtual shopping mall, a virtual store, an imaging section, a virtual space, a reflection section, and a user terminal. As discussed above with respect to Prong Two of Step 2A, although additional computer-related elements are recited, the claims merely invoke such additional elements as a tool to perform the abstract idea. See MPEP 2106.05(f). Moreover, the limitations of claims 1, 7, and 8 are manual processes (e.g., collecting information, acquiring information, etc.).
The courts have indicated that mere automation of manual processes is not sufficient to show an improvement in computer-functionality (see MPEP 2106.05(a)(I)). Furthermore, as discussed above with respect to Prong Two of Step 2A, claims 1, 7, and 8 merely recite the additional elements in order to further define the field of use of the abstract idea, therein attempting to generally link the use of the abstract idea to a particular technological environment, such as the Internet or computing networks (see Ultramercial, Inc. v. Hulu, LLC (Fed. Cir. 2014); Bilski v. Kappos (2010); MPEP 2106.05(h)). Similar to FairWarning v. Iatric Sys., claims 1, 7 and 8 specifying that the abstract idea of monitoring a user’s behavior to browse and purchase products and creating a representation of the customer in the physical store based on customers’ behavior information is executed in a computer environment merely indicates a field of use in which to apply the abstract idea because this requirement merely limits the claim to the computer field, i.e., to execution on a generic computer.

Even when considered as an ordered combination, the additional elements do not add anything that is not already present when they are considered individually. In Alice Corp., the Court considered the additional elements “as an ordered combination,” and determined that “the computer components…‘[a]dd nothing…that is not already present when the steps are considered separately’ and simply recite intermediated settlement as performed by a generic computer.” Id. (citing Mayo, 566 U.S. at 79, 101 USPQ2d at 1972). Similarly, viewed as a whole, claims 1, 7 and 8 simply convey the abstract idea itself facilitated by generic computing components.
Therefore, under Step 2B of the Alice/Mayo test, there are no meaningful limitations in claims 1, 7 and 8 that transform the judicial exception into a patent eligible application such that the claims amount to significantly more than the judicial exception itself (Step 2B: NO).

Dependent claims 2-6, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because they do not add “significantly more” to the abstract idea. Dependent claims 2-6 further fall within the “Certain Methods of Organizing Human Activity” grouping of abstract ideas, enumerated in the MPEP, in that they recite commercial or legal interactions such as advertising, marketing, or sales activities or behaviors and managing personal behavior or relationships or interactions between people. Additionally, the claims further fall within the “Mental Processes” grouping of abstract ideas, enumerated in the MPEP, in that they recite concepts performed in the human mind, such as observations, evaluations, judgements, and opinions. Dependent claims 2-6 fail to identify additional elements and, as such, are not indicative of integration into a practical application. As such, under Step 2A, dependent claims 2-6 are “directed to” an abstract idea.

Similar to the discussion above with respect to claims 1, 7 and 8, dependent claims 2-6, analyzed individually and as an ordered combination, merely further define the commonplace business method (i.e., monitoring a user’s behavior to browse and purchase products and creating a representation of the customer in the physical store based on customers’ behavior information) being applied on a general purpose computer and, therefore, do not amount to significantly more than the abstract idea itself. See MPEP 2106.05(f)(2). Further, these limitations generally link the use of the abstract idea to a particular technological environment or field of use. Accordingly, under the Alice/Mayo test, claims 1-8 are ineligible.
Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 5, and 7-8 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by U.S. Patent Publication No. 2019/1017935 to Spivack, hereinafter Spivack.

Regarding claim 1, Spivack discloses a processing device, comprising: a construction section that constructs a virtual shopping mall having virtual stores where a user can browse and purchase products in a virtual space and provides the virtual shopping mall to a user terminal used by the user (creating an alternate reality environment/augmented reality (AR) environment (para. 1043); the alternate reality environment is of a physical location representing a real environment associated with a physical location and/or a virtual object, and includes scenes such as a photograph or image of the real environment, a recorded video of the real environment, or a live video or live stream of the real environment (para. 1043); the host server and/or the commerce/marketplace engine can create a Marketplace for buying virtual goods to construct and customize products (para. 648); see also paras.
60-61)); an image processing section that acquires image information, which is obtained by monitoring a physical store that actually sells products displayed in a virtual store, and processes the image information to acquire customer behavior information of the physical store (volumetric video capture for enrichment of places depicted in the AR environment; for example, multiple cameras are placed around physical locations and those locations are monitored or constantly rendered in video from every perspective and can be viewed (para. 195); rear camera interactions, where users center a VOB to highlight/trigger an event (paras. 398-399); front camera interactions (para. 404); the commerce/marketplace engine provides or facilitates a general shopping experience in the AR environment (para. 705); users can create virtual objects (VOBs) for their items/services with photos and videos, and potential buyers interact with the item (para. 709); the scene of the real environment includes one or more of a photograph or image of the real environment, a photorealistic production or illustration of the real environment, a recorded video, or a live video or live stream of the real environment (para. 1044)); and a reflection processing section that causes each user terminal to reflect the customer behavior information of the physical store into the virtual store or the virtual shopping mall (the VOB management can assign custom behaviors to VOBs that enable them to do more complex activities or support other types of activities by users (para. 356); the VOB management engine can configure the VOBs to behave in an autonomous manner, such as chasing users, running away from users, hiding, or doing things to other objects (paras. 357-362); behaviors can specify rules about what an object does in various circumstances, such as “if user comes from place x then do y”, “if a user has <permission/qualifications> then do x”, “if user has objects <set> and <something else is true> then do (para.
363-367)); other possible custom behaviors: react to the user, talk to the user, play with the user, fight with the user, and move around when the user does x (paras. 374-375); the VOB management engine can configure VOBs to react or act in response to gesture combinations (para. 379)).

Regarding claim 5, Spivack discloses the processing device according to claim 1. Furthermore, Spivack discloses wherein the image processing section acquires video information of the physical store as the behavior information (multiple cameras can be placed around physical locations and those locations can be monitored or constantly rendered in video from every perspective and can be viewed and participated in by non-local users as well as local users (para. 195)); and the reflection processing section causes each user terminal to display the video information in an advertisement area arranged in the virtual store (the engine can further enable advertisers to create their own layers (para. 705); the advertising engine implements and devises advertising as one of the monetization models, and in general enables users, groups, brands, merchants, and companies to promote any VOB they have permissions to promote (para. 221); the incentive management engine creates rewarding experiences for advertisers so as to implement strategies to drive consumers to carry out behaviors of value (para. 285)).

Regarding claim 7, the limitation in method claim 7 is closely parallel to the limitation of device claim 1 analyzed above and is rejected on the same bases. Regarding claim 8, the limitation in program claim 8 is closely parallel to the limitation of device claim 1 analyzed above and is rejected on the same bases.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C.
102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2-3 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2019/1017935 to Spivack, hereinafter Spivack, in view of U.S. Patent Application Publication No. 2022/0092134 to Cypher, hereinafter Cypher.

Regarding claim 2, Spivack discloses the processing device according to claim 1.
Spivack further discloses wherein the image processing section acquires, as the behavior information, a product browsing behavior of a customer in the physical store, and for a first product that has been browsed or tried in the physical store among products arranged in the virtual store (generate a mixed reality environment associated with a physical location; a human user can be enabled to discover relevant objects in the other reality through physical exploration of the physical location and areas surrounding the physical location (abstract); multiple cameras are placed around physical locations and those locations are monitored or constantly rendered in video from every perspective and can be viewed (para. 195); the user interface depicts an alternate reality environment including a scene of an actual Louis Vuitton store; the multiple virtual objects in the alternate reality environment can include incentive objects such as the virtual reward object, which may be a location-based brand or merchant sponsored reward made available, visible or accessible to users when they are in the vicinity of the physical location; the user interface for the alternate reality environment includes a radar depicted graphically, which can indicate object proximity to a human user in set, predetermined or configurable distances (e.g., in-reach, near, etc.) (para. 92)), and the reflection processing section causes each user terminal to display information indicating that the product has been browsed or tried on by customers (the user interface depicts an alternate reality environment including a scene of an actual Louis Vuitton store; the multiple virtual objects in the alternate reality environment can include incentive objects such as the virtual reward object, which may be a location-based brand or merchant sponsored reward made available, visible or accessible to users when they are in the vicinity of the physical location.
The user interface for the alternate reality environment includes a radar depicted graphically. The radar can indicate object proximity to a human user in set, predetermined or configurable distances (e.g., in-reach, near, etc.) (para. 92); users create VOBs for their items with photos and videos for potential buyers to interact with the items, with the ability to wear the item (para. 705)).

However, Spivack does not disclose a product trying-on behavior of a customer in the physical store, or a first product that has been browsed or tried on a predetermined number of times or more by customers in the physical store. Spivack discloses the generation of a mixed reality environment associated with a physical location. A human user can be enabled to discover relevant objects in the other reality through physical exploration of the physical location and areas surrounding the physical location (abstract). The alternate reality environment comprises a commerce environment, where the commerce environment can enable the human user to carry out a transaction with respect to another entity in relation to a virtual object. The virtual object in the commerce environment can represent a physical good in a physical location (para. 1103). Furthermore, Spivack discloses users creating VOBs for their items with photos and videos for potential buyers to interact with the items, with the ability to wear the item (para. 705).

Cypher, on the other hand, teaches a product trying-on behavior of a customer in the physical store (trying on a garment in a fitting room of the retail store (para. 130); the feedback module receives feedback information directly from individuals from the in-store actions of the individual (para. 93)); and a first product that has been browsed or tried on a predetermined number of times or more by customers in the physical store (feedback modules track the amount of time an individual wears a particular garment (e.g., while trying the item on in a fitting room) (para.
94); the feedback information tracks the percentage of individuals that try on a particular item and tracks the average time all individuals try on a particular garment (para. 95)).

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to have included the product trying-on behavior of a customer in the physical store, and the first product that has been browsed or tried on a predetermined number of times or more by customers in the physical store, of Cypher, in the device of Spivack, in order to determine real-time, localized, and segmented feedback about how specific items are performing (Cypher, para. 96).

Regarding claim 3, Spivack discloses the processing device according to claim 2. Spivack further discloses wherein, for the first product, the reflection processing section displays a comment (the host server provides the ability to access, perceive, hear, see or interact with VOBs in the environment (para. 318); the interaction manager enables VOBs to support actions or interactions (para. 318) including annotation (para. 343) such as commenting (para. 344); the annotation action can be initiated in response to an annotation action by a human user (para. 1089)).

Spivack discloses a client application component which provides geo-contextual awareness to human users of the AR environment and platform. The client application can sense, detect or recognize virtual objects and/or other human users, or any other human or computer participants, that are within the range of their physical location, and can enable the user to observe, view, act, interact and react with respect to the VOBs (para. 62). In addition, Spivack discloses an activity management engine that includes an interaction manager and a VOB management engine (para. 317). The host server provides, for example, via the VOB management engine of the activity engine, the ability to access, perceive, hear, see or interact with VOBs in the environment.
The activity management engine can also enable, facilitate or govern the use of human gestures or motions detected at a device used to access the AR environment to interact with VOBs. The activity management engine can facilitate or enable VOBs to passively sense, and actively interact with, other VOBs or human users in the AR environment (para. 318). The interaction manager includes annotation such as commenting (para. 318; paras. 343-344). Moreover, users can create VOBs for their items with videos so that potential buyers interact with the items, such as by wearing the item (para. 709).

However, Spivack does not disclose a comment indicating that the first product has been browsed or tried on by customers, highlights the first product, or changes a display position of the first product onto a line of sight of the user. Cypher, on the other hand, teaches a comment indicating that the first product has been browsed or tried on by customers, highlights the first product, or changes a display position of the first product onto a line of sight of the user (the individual may receive a message which includes a list of all the items the consumer tried (para. 34); the application provides feedback about items (para. 48); the method provides real-time feedback from a target audience regarding an item being tried on by an individual (para. 129)).

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to have included the comment indicating that the first product has been browsed or tried on by customers of Cypher, in the device of Spivack, in order to determine real-time, localized, and segmented feedback about how specific items are performing (Cypher, para. 96).

Claims 4 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2019/1017935 to Spivack, hereinafter Spivack, in view of U.S. Patent Application Publication No. 2021/036596 to Marshall, hereinafter Marshall.
Regarding claim 4, Spivack discloses the processing device according to claim 1. Spivack further discloses the imaging section (multiple cameras can be placed around physical locations, and those locations can be monitored or constantly rendered in video from every perspective and can be viewed and participated in by non-local users as well as local users (para. 195)), and the reflection processing section (the interaction manager can manage, control, determine, and facilitate interaction of human users with VOBs, things, places, and objects; such interactions can occur virtually or in the real world, with a real-world effect/outcome and a digital/virtual effect in the AR environment with real-world results, outcome, use and effect (para. 394)).

Spivack does not disclose a degree of congestion of customers in the physical store as the behavior information, and to display silhouette images, the number of which corresponds to the degree of congestion, in the virtual store.

Spivack discloses a view selector which enables activity depicted or presented in the AR environment via the client device to be perceived, viewed and/or accessed in a place in a number of ways (para. 781). A map view can show indications of crowds, live activity levels or popularity of the AR environment in various places, numbers of VOBs, etc. These can be summarized with symbols, colors or animations, for example, to indicate that there is more happening in certain places (paras. 782 and 784). The identifiers of VOBs can indicate “dangerous” places, such as dangerous locations or traffic, to warn users or to prevent users from getting harmed; such dangerous areas can be marked with a different color in the maps view (para. 785). Spivack further discloses visible objects and avatars in a named place or within a certain distance from the user’s device (para. 787). It can indicate that objects and avatars are moving or changing (para. 788).
It can indicate some kind of “heat” for objects that are more or less popular (para. 789). Spivack also discloses changes over time, such as moving from one location to another so as to go where there are more people or to go away from crowds of people (paras. 376-377).

Marshall, on the other hand, teaches a degree of congestion of customers in the physical store as the behavior information, and to display silhouette images, the number of which corresponds to the degree of congestion, in the virtual store (traffic density is measured by one or more sensors, and the measurements include spatial information that indicates the presence of customers in the region (para. 3); the merchant commerce facilities may be incorporated into the e-commerce platform, such as where POS devices in a physical store of a merchant are linked into the e-commerce platform or where a merchant off-platform website is tied into the e-commerce platform (para. 32); the system is not limited to use with physical retail stores and can also be implemented for virtual products and/or virtual retail stores that are provided using augmented reality or virtual reality (para. 83); analysis of the measurements in the measurement record can detect the presence of a customer in a region of the retail store, for example based on the shape of a cluster of spatial data points in the region, and the presence and location of the entities in the retail store help determine traffic densities for the retail store (para. 141)).
It would have been obvious to one of ordinary skill in the art at the time the invention was filed to have included the degree of congestion of customers in the physical store as the behavior information, and to display silhouette images, the number of which corresponds to the degree of congestion, in the virtual store, of Marshall, in the device of Spivack, in order to determine the value of display space in a manner that is reliable, accurate and unbiased (Marshall, para. 3), so that the merchant may benefit from any sales of the product occurring through the retail store (Marshall, para. 2).

Regarding claim 6, Spivack discloses the processing device according to claim 1. Spivack discloses wherein the image processing section acquires, as the behavior information (multiple cameras can be placed around physical locations, and those locations can be monitored or constantly rendered in video from every perspective and can be viewed and participated in by non-local users as well as local users (para. 195)); and the reflection processing section (the interaction manager can manage, control, determine, and facilitate interaction of human users with VOBs, things, places, and objects; such interactions can occur virtually or in the real world, with a real-world effect/outcome and a digital/virtual effect in the AR environment with real-world results, outcome, use and effect (para. 394)).

However, Spivack does not disclose a degree of congestion of customers in each physical store corresponding to each virtual store opened in the virtual shopping mall, and causing each user terminal to highlight each virtual store in the virtual shopping mall according to the degree of congestion of each corresponding physical store.
Spivack discloses a view selector which enables activity depicted or presented in the AR environment via the client device to be perceived, viewed and/or accessed in a place in a number of ways (para. 781). A map view can show indications of crowds, live activity levels or popularity of the AR environment in various places, numbers of VOBs, etc. These can be summarized with symbols, colors or animations, for example, to indicate that there is more happening in certain places (paras. 782 and 784). The identifiers of VOBs can indicate “dangerous” places, such as dangerous locations or traffic, to warn users or to prevent users from getting harmed; such dangerous areas can be marked with a different color in the maps view (para. 785). Spivack further discloses visible objects and avatars in a named place or within a certain distance from the user’s device (para. 787). It can indicate that objects and avatars are moving or changing (para. 788). It can indicate some kind of “heat” for objects that are more or less popular (para. 789). Spivack also discloses changes over time, such as moving from one location to another so as to go where there are more people or to go away from crowds of people (paras. 376-377).

Marshall, on the other hand, teaches a degree of congestion of customers in each physical store corresponding to each virtual store opened in the virtual shopping mall (traffic density is measured by one or more sensors, and the measurements include spatial information that indicates the presence of customers in the region (para. 3); the merchant commerce facilities may be incorporated into the e-commerce platform, such as where POS devices in a physical store of a merchant are linked into the e-commerce platform or where a merchant off-platform website is tied into the e-commerce platform (para. 32); the system is not limited to use with physical retail stores and can also be implemented for virtual products and/or virtual retail stores that are provided using augmented reality or virtual reality (para. 83); a merchant may be more than an individual, and all references to merchants throughout the disclosure should also be understood to be references to groups of individuals, companies, corporations, computing entities and the like (para. 31)); and causing each user terminal to highlight each virtual store in the virtual shopping mall according to the degree of congestion of each corresponding physical store (a layout of the retail store can include an indication of traffic densities in various areas of the retail store (para. 230); Figures 12 and 13 provide a way to characterize and present traffic densities; for example, traffic densities could be presented as a heat map (highlight) overlaid on a layout of the retail store (para. 272)).

It would have been obvious to one of ordinary skill in the art at the time the invention was filed to have included the degree of congestion of customers in each physical store corresponding to each virtual store opened in the virtual shopping mall, and causing each user terminal to highlight each virtual store in the virtual shopping mall according to the degree of congestion of each corresponding physical store, of Marshall, in the device of Spivack, in order to determine the value of display space in a manner that is reliable, accurate and unbiased (Marshall, para. 3), so that the merchant may benefit from any sales of the product occurring through the retail store (Marshall, para. 2).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. U.S. Patent Application Publication No. 2014/0279233 to Lau et al.
discloses a social platform for soliciting qualified opinions and validating shopping, engaging shoppers, and delivering feedback. The social platform can integrate with existing sites to provide options for interacting with other qualified users. U.S. Patent Application Publication No. 2020/0302510 to Chachek et al. discloses a system, device and method of augmented-reality-based mapping of a venue and navigation within a venue. The system includes performing a crowd-sourced mapping process that maps a retail store and maps particular products sold within that retail store, based on computer-vision analysis of a plurality of images captured by a plurality of end-user devices of customers within that retail store, and generating a representation of a store map reflecting the actual real-time location of particular products within that retail store. U.S. Patent Application Publication No. 2021/0142394 to Milicevic discloses a virtual shopping system having a user interface presenting a virtual storefront, wherein the user interacts with the virtual storefront to perform shopping activities. WO 2012/075589 to Azba discloses a method and system for enabling a realistic virtual shopping experience. “Empirical analysis of consumer reaction to the virtual reality shopping mall” to Lee et al. discloses an internet shopping mall study relating to the user interface of virtual reality shopping malls and whether the user interface positively affects customer satisfaction.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARISSA THEIN, whose telephone number is (571) 272-6764. The examiner can normally be reached M-F 8:30am - 5:30pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Marissa Thein, can be reached at (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Marissa Thein
Supervisory Patent Examiner
Art Unit 3625

/MARISSA THEIN/
Supervisory Patent Examiner, Art Unit 3689

Prosecution Timeline

Nov 03, 2023
Application Filed
Sep 07, 2025
Non-Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12584729
DUAL LASER MEASUREMENT DEVICE AND ONLINE ORDERING SYSTEM USING THE SAME
2y 5m to grant Granted Mar 24, 2026
Patent 12340411
COMPUTING TECHNIQUES TO PREDICT LOCATIONS TO OBTAIN PRODUCTS UTILIZING MACHINE-LEARNING
2y 5m to grant Granted Jun 24, 2025
Patent 11238512
COMPUTER-READABLE MEDIA, METHOD, AND SYSTEM FOR PRODUCING PHYSICAL ARTIFACTS
2y 5m to grant Granted Feb 01, 2022
Patent 10074121
Shopper Helper
2y 5m to grant Granted Sep 11, 2018
Patent 8949139
APPARATUS AND METHOD FOR MANAGING SCHEDULE IN PORTABLE TERMINAL
2y 5m to grant Granted Feb 03, 2015
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
28%
Grant Probability
60%
With Interview (+31.2%)
5y 4m
Median Time to Grant
Low
PTA Risk
Based on 219 resolved cases by this examiner. Grant probability derived from career allow rate.
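
The projection figures above follow from the examiner's career data shown on this page: 62 grants out of 219 resolved cases give the base grant probability, and the interview-adjusted figure appears consistent with adding the +31.2% lift as percentage points. A minimal sketch of that arithmetic, assuming the lift is additive (the page does not state the exact formula):

```python
def grant_probability(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage."""
    return 100.0 * granted / resolved

# Examiner's career data from this page: 62 granted / 219 resolved.
base = grant_probability(62, 219)

# Assumed additive interview lift of +31.2 percentage points.
with_interview = base + 31.2

print(f"Base grant probability: {base:.0f}%")
print(f"With interview: {with_interview:.0f}%")
```

Rounded to whole percentages, these reproduce the 28% and 60% shown above.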
