DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP §§ 706.02(l)(1) - 706.02(l)(3) for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/forms/. The filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claims 1, 7, 8, 11, 17, and 18 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5, 6, 9, 13, and 14 of US Patent 11250634. Although the claims at issue are not identical, they are not patentably distinct from each other because the pending claims are either an obvious variation of the patented claims or are entirely covered by the patented claims.
For example, claim 1 of the instant application discloses a method for inserting a candidate supplemental virtual object in a virtual environment, the method comprising: identifying, based on virtual environment information, a plurality of persistent virtual objects and a plurality of temporary virtual objects; retrieving the candidate supplemental virtual object; calculating, using a machine learning model, based on (i) the candidate supplemental virtual object, (ii) the plurality of persistent virtual objects, and (iii) the plurality of temporary virtual objects, a confidence value indicating whether the candidate supplemental virtual object fits into the virtual environment; and in response to determining that the confidence value is indicative of the candidate supplemental virtual object fitting in the virtual environment, inserting the candidate supplemental virtual object in the virtual environment. These limitations are all disclosed by claim 1 of US Patent 11250634. Therefore, claim 1 of the instant application is covered by claim 1 of US Patent 11250634 and is not patentably distinct from that patented claim.
The following table illustrates a comparative mapping between the limitations of claim 1 of the instant application and the corresponding limitations of claim 1 of US Patent 11250634.
Claim 1 of the Instant Application 18631253 vs. Claim 1 of the Patent 11250634 (corresponding limitations paired below):

Instant: A method for inserting a candidate supplemental virtual object in a virtual environment, the method comprising:
Patent: A method for inserting supplemental content into a three-dimensional virtual environment, the method comprising:

Instant: identifying, based on virtual environment information, a plurality of persistent virtual objects and a plurality of temporary virtual objects;
Patent: identifying a first plurality of persistent virtual objects displayed in a plurality of consecutive virtual environment frames; identifying a second plurality of temporary virtual objects displayed in the plurality of consecutive virtual environment frames;

Instant: retrieving the candidate supplemental virtual object;
Patent: selecting a first virtual object from a first virtual environment frame of the plurality of consecutive virtual environment frames;

Instant: calculating, using a machine learning model, based on (i) the candidate supplemental virtual object, (ii) the plurality of persistent virtual objects, and (iii) the plurality of temporary virtual objects, a confidence value indicating whether the candidate supplemental virtual object fits into the virtual environment; and
Patent: training a machine learning model to calculate a confidence value that a candidate virtual object fits into a given virtual environment based on an input that includes (a) the candidate virtual object, (b) a list of persistent virtual objects in the virtual environment, and (c) a list of temporary virtual objects in the virtual environment, wherein the machine learning model is trained using a training example that predicts that the first virtual object fits into a virtual environment that comprises the first plurality of persistent virtual objects and the second plurality of temporary virtual objects; retrieving a candidate object comprising supplemental content for insertion into the virtual environment; determining, using the machine learning model, whether the candidate object fits into the virtual environment; and

Instant: in response to determining that the confidence value is indicative of the candidate supplemental virtual object fitting in the virtual environment, inserting the candidate supplemental virtual object in the virtual environment.
Patent: in response to determining that the candidate object fits in the virtual environment, inserting the candidate object into the virtual environment.
Similarly, claims 1, 9, 10, 11, 19, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 3, 6, 11, 13, and 16 of US Patent 11983828.
The following is a complete listing of the correspondence between the claims of the instant application and the claims of the patents:
Claims of the Instant Application 18631253 -> Claims of the Patent 11250634:
1 -> 1
7 -> 5
8 -> 6
11 -> 9
17 -> 13
18 -> 14

Claims of the Instant Application 18631253 -> Claims of the Patent 11983828:
1 -> 1
9 -> 3
10 -> 6
11 -> 11
19 -> 13
20 -> 16
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 7, 8, 10-13, 17, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Kipman et al. (US 20170236332) in view of Lovitt (US 20210405959).
Regarding claim 11, Kipman discloses A system for inserting a candidate supplemental virtual object in a virtual environment (Kipman, “[0017] The vision system may be used to construct, in real time, world surfaces of the objects in the user's field of view (FOV). [0021] For instance, a social network application leveraging a social graph may be overlaid over an entire virtual world. [0051] The imagery, in this case, includes a fantastic virtual-reality (VR) display image 78, which is viewable on each of the display devices. [0059] FIG. 9 shows a parameter adjustment visual 92 overlaid on a scene. [0060] Other examples may include overlay of a theme selection visual overlaid on the scene sighted by the user”) comprising:
input/output circuitry configured to access virtual environment information (Kipman, “[0027] Near-eye display device 10 of FIG. 1 includes an input system 24 configured to receive a parameter value. [0034] image or video output from the flat-imaging and depth-imaging cameras may be co-registered and combined into a unitary (e.g., RGB+depth) data structure or stream.”); and control circuitry configured to:
identify, based on the virtual environment information, a plurality of persistent virtual objects and a plurality of temporary virtual objects (Kipman, “[0058] Any suitable processing approach may be used to differentiate the foreground object from everything else in the scene (e.g., foreground/background analysis using depth information, edge detection, and machine-learning recognizers). [0059] a ceiling or other object (foreground or background) may be recognized using any suitable process. For example, a depth image may be analyzed to find a generally horizontal overhead surface”. The foreground objects correspond to the temporary virtual objects, and the background objects correspond to the persistent virtual objects);
retrieve the candidate supplemental virtual object, insert the candidate supplemental virtual object in the virtual environment (Kipman, “[0021] For instance, a social network application leveraging a social graph may be overlaid over an entire virtual world. [0051] The imagery, in this case, includes a fantastic virtual-reality (VR) display image 78, which is viewable on each of the display devices. [0059] FIG. 9 shows a parameter adjustment visual 92 overlaid on a scene. [0060] Other examples may include overlay of a theme selection visual overlaid on the scene sighted by the user. [0073] The display is configured to display virtual image content that adds an augmentation to a real-world environment viewed by a user of the mixed reality display device. The graphics processor is coupled operatively to the input system and to the display. The graphics processor is configured to render the virtual image content so as to variably change the augmentation, to variably change a perceived realism of the real world environment in correlation to the parameter value”. The virtual image content corresponds to the supplemental virtual object).
On the other hand, Kipman fails to explicitly disclose but Lovitt discloses calculate, using a machine learning model, based on (1) the candidate supplemental virtual object (the virtual object), (2) the plurality of objects (the physical objects), a confidence value (object-matching score) indicating whether the candidate supplemental virtual object fits into the environment; and in response to determining that the confidence value is indicative of the candidate supplemental virtual object fitting in the environment, insert the candidate supplemental virtual object in the environment (Lovitt, “[0024] the augmented reality system can determine a physical object matches an analogous virtual object based on an object-matching score or other appropriate techniques. [0025] the augmented reality system renders a portion of the analogous virtual object as an overlay on the corresponding physical object. [0039] A virtual object can have features, characteristics, and other qualities (e.g., as defined by a model, a file, a database). [0181] Augmented reality system 1002 may generate, store, receive, and send augmented reality data, such as, for example, augmented reality scenes, augmented reality experiences, virtual objects, or other suitable data related to the augmented reality system 1002. [0132] the object-matching score generator 708 calculates the object-matching score associated with the physical object and a particular virtual object by identifying matches (e.g., character string matches, threshold matches) between the characteristics of the physical object and characteristics of the particular virtual object. [0133] After calculating object-matching scores for every combination of physical objects in the physical environment and virtual objects in the augmented reality experience, the object-matching score generator 708 can identify analogous virtual objects. the object-matching score generator 708 can determine that the virtual object is analogous to the physical object when the object-matching score associated with both is highest and when that score is satisfies an object-matching threshold. [0145] determining that the physical object within the physical environment corresponds to the analogous virtual object of an augmented reality experience can be based on image comparisons, description comparisons, heat maps, and/or machine learning”).
Lovitt teaches that a virtual object is determined to be analogous to a physical object based on the object-matching score. The object-matching score is calculated based on the characteristics of the physical object and the characteristics of the particular virtual object; therefore, the object-matching score is calculated based on (1) the virtual object and (2) the physical objects. The object-matching score indicates whether the virtual object fits in the environment of the physical objects. In particular, the virtual object whose score is highest, or whose score satisfies an object-matching threshold, is determined to fit in the environment. As a result, the analogous virtual object is rendered as an overlay on the corresponding physical object of the environment; that is, the virtual object fitting in the environment is inserted into the environment.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Lovitt and Kipman to include all limitations of claim 11. That is, replacing the environment with physical objects of Lovitt with the virtual environment containing a plurality of persistent virtual objects and a plurality of temporary virtual objects of Kipman, so that the object-matching score is calculated based on (i) the candidate supplemental virtual object, (ii) the plurality of persistent virtual objects, and (iii) the plurality of temporary virtual objects; and then applying Lovitt's overlaying of the virtual object based on the object-matching score to the insertion of the virtual image content into the virtual environment of Kipman. The motivation/suggestion would have been to modify or omit virtual graphics to depict the physical object as part of the augmented reality experience and extemporaneously modify the augmented reality experience based on user interactions with the physical object or corresponding virtual graphic (Lovitt, [0007]), and to efficiently render graphics or generate sound for the augmented reality experience, thereby reducing the computer processing and other computing resources required for conventionally rendering such an experience (Lovitt, [0020]).
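For illustration only, the following minimal Python sketch traces the mapped data flow: a scoring function takes (i) a candidate object, (ii) the persistent objects, and (iii) the temporary objects, produces a confidence value, and the candidate is inserted only when that value clears a threshold. The dictionary layout, overlap features, weights, and threshold are hypothetical stand-ins, not the claimed method or either reference's implementation.

```python
import math

# Illustrative sketch only. Objects are plain dicts with an "attributes" set;
# the overlap features, weights, and threshold stand in for a trained model.

def mean_overlap(candidate, objects):
    """Mean Jaccard similarity between the candidate and a list of objects."""
    if not objects:
        return 0.0
    c = candidate["attributes"]
    return sum(len(c & o["attributes"]) / max(1, len(c | o["attributes"]))
               for o in objects) / len(objects)

def fit_confidence(candidate, persistent, temporary):
    """Stand-in for the machine learning model: a logistic score over
    overlap with the persistent and temporary object lists."""
    z = (3.0 * mean_overlap(candidate, persistent)
         + 1.0 * mean_overlap(candidate, temporary)
         - 0.5)
    return 1.0 / (1.0 + math.exp(-z))

# Toy usage: insert the candidate only when the confidence clears a threshold.
environment = [{"name": "tree", "attributes": {"outdoor", "green"}}]
candidate = {"name": "bench", "attributes": {"outdoor", "wooden"}}
if fit_confidence(candidate, environment, []) >= 0.5:
    environment.append(candidate)
```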
Regarding claim 17, Kipman in view of Lovitt discloses The system of claim 11; the limitations directed to the plurality of persistent virtual objects and the plurality of temporary virtual objects have been addressed above.
On the other hand, Kipman fails to explicitly disclose but Lovitt discloses wherein the control circuitry, when retrieving the candidate supplemental virtual object, is configured to: identify attributes corresponding to the plurality of objects; access a database of a plurality of supplemental content items; determine that a supplemental content item of the plurality of supplemental content items has an attribute matching at least one attribute of the attributes; and in response to determining that the supplemental content item has the matching attribute, select the supplemental content item as the candidate supplemental virtual object (Lovitt, “[0024] the augmented reality system can further identify analogous virtual objects by determining threshold matches between the types, classifications, features, functions, and characteristics of the physical objects and the virtual objects. For instance, the augmented reality system can determine a physical object matches an analogous virtual object based on an object-matching score or other appropriate techniques. [0025] the augmented reality system renders a portion of the analogous virtual object as an overlay on the corresponding physical object. [0039] A virtual object can have features, characteristics, and other qualities (e.g., as defined by a model, a file, a database). [0133] the object-matching score generator 708 can determine that the virtual object is analogous to the physical object when the object-matching score associated with both is highest and when that score is satisfies an object-matching threshold. [0145] generating an object-matching score indicating a degree to which one or more characteristics of the physical object match one or more characteristics of the analogous virtual object; and determining the object-matching score satisfies an object-matching threshold. [0181] Augmented reality system 1002 may generate, store, receive, and send augmented reality data, such as, for example, augmented reality scenes, augmented reality experiences, virtual objects, or other suitable data related to the augmented reality system 1002”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Lovitt and Kipman to include all limitations of claim 17. That is, replacing the objects of Lovitt with the plurality of persistent virtual objects and the plurality of temporary virtual objects of Kipman, and applying the method of Lovitt to the selection of the candidate supplemental virtual object of Kipman. The same motivation as for claim 11 applies here.
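By way of a hedged illustration of the selection path characterized above (attributes of scene objects gathered, a database of supplemental items consulted, an item with a matching attribute chosen), the sketch below uses assumed data shapes; the one-shared-attribute rule is an assumption, not Lovitt's object-matching score generator.

```python
# Hypothetical sketch of the claim 17 selection path; data shapes assumed.
def select_candidate(scene_objects, supplemental_db):
    scene_attrs = set()
    for obj in scene_objects:
        scene_attrs |= obj["attributes"]        # identify scene attributes
    for item in supplemental_db:                # "database" is a plain list
        if item["attributes"] & scene_attrs:    # a matching attribute found
            return item                         # chosen as the candidate
    return None

db = [{"name": "lamp", "attributes": {"indoor"}},
      {"name": "kite", "attributes": {"outdoor"}}]
scene = [{"name": "tree", "attributes": {"outdoor", "green"}}]
print(select_candidate(scene, db))              # -> the kite entry
```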
Regarding claim 18, Kipman in view of Lovitt discloses The system of claim 11.
On the other hand, Kipman fails to explicitly disclose but Lovitt discloses update, based on the virtual environment information, a list of displayed virtual objects in a time period starting from a first time and ending at a current time; and wherein the control circuitry is configured to retrieve the candidate supplemental virtual object based on comparing the candidate supplemental virtual object to the list of displayed virtual objects (Lovitt, “[0064] In at least one embodiment, the augmented reality system 102 can update or replace the virtual graphic overlay based on further user interactions with an area of the physical object on which the virtual graphic overlay is superimposed. [0110] As the augmented reality system 102 detects the user 112a continuing to flip through the physical pages of the physical book 606, the augmented reality system can continue to update or re-render the virtual graphic overlay 608 to approximate the reading progress of the user 112a through the corresponding augmented reality materials. [0145] For example, determining that the physical object within the physical environment corresponds to the analogous virtual object of an augmented reality experience can be based on image comparisons, description comparisons, heat maps, and/or machine learning”). The same motivation as for claim 11 applies here.
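A rough sketch of the claim 18 bookkeeping, under assumed data shapes (timestamped entries kept within the recited time window, then consulted during retrieval, here by skipping exact repeats), is below; none of it is drawn from Lovitt's disclosure.

```python
# Hypothetical sketch: maintain a windowed list of displayed objects and
# retrieve a candidate by comparison against that list.
def update_displayed(displayed, obj, now, start_time):
    displayed.append((now, obj))
    return [(t, o) for (t, o) in displayed if t >= start_time]  # keep window

def retrieve_candidate(candidates, displayed):
    shown = {o["name"] for (_, o) in displayed}
    for c in candidates:
        if c["name"] not in shown:    # compare candidate against the list
            return c
    return None

displayed = update_displayed([], {"name": "hat"}, now=10.0, start_time=0.0)
print(retrieve_candidate([{"name": "hat"}, {"name": "sword"}], displayed))
```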
Regarding claim 20, Kipman in view of Lovitt discloses The system of claim 11.
Kipman further discloses wherein the virtual environment information comprises an augmented reality (AR) overlay corresponding to an AR display device (Kipman, “[0002] Some display devices offer a mixed-reality (MR) experience, in which real objects in a user's field of view are combined with computerized, virtual imagery. Such devices may superpose informative textual overlays on real-world scenery or augment the user's world view with virtual content, for example. [0013] For example, a suitably configured display device may add a video overlay (e.g., a virtual hat) to an object or person sighted in a real video feed. Sound and/or feeling may also be added to the object or person. [0014] the display and vision system may be installed in an environment, such as a home, office, or vehicle. So-called ‘augmented-reality’ (AR) headsets, in which at least a portion of the outside world is directly viewable through the headset, may also be used.”).
On the other hand, Kipman fails to explicitly disclose but Lovitt discloses wherein the confidence value indicates whether the candidate supplemental virtual object is suitable for the AR overlay (Lovitt, “[0025] the augmented reality system renders a portion of the analogous virtual object as an overlay on the corresponding physical object. [0133] the object-matching score generator 708 can determine that the virtual object is analogous to the physical object when the object-matching score associated with both is highest and when that score is satisfies an object-matching threshold”). The same motivation as for claim 11 applies here.
Regarding claims 1-3, 7, 8, and 10, they are interpreted and rejected for the same reasons set forth for claims 11-13, 17, 18, and 20, respectively.
Claims 12 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kipman et al. (US 20170236332) in view of Lovitt (US 20210405959), and further in view of Wu et al. (US 20180211113).
Regarding claim 12, Kipman in view of Lovitt discloses The system of claim 11.
On the other hand, Kipman in view of Lovitt fails to explicitly disclose but Wu discloses the virtual environment information comprises a sequence of virtual environment frames; each persistent virtual object of the plurality of persistent virtual objects is displayed in each virtual environment frame of the sequence of virtual environment frames; and each temporary virtual object of the plurality of temporary virtual objects is not displayed in at least one virtual environment frame of the sequence of virtual environment frames (Wu, “[0037] alternatively, background subtraction, which requires the estimation of the stationary scene background, followed by subtraction of the estimated background from the current frame can detect foreground objects (which include objects in motion). [0059] FIGS. 4A-4D show example frames of how videos analyzed may be marked automatically by the disclosed method. It shows four sample frames: a pedestrian “hanging out” (FIG. 4A), a vehicle approaching (FIG. 4B), a drug deal in progress (FIG. 4C), and leaving (FIG. 4D)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Wu with the combination of Lovitt and Kipman to include all limitations of claim 12. That is, applying the moving foreground and stationary background of Wu to the foreground and background objects of Kipman and Lovitt. The motivation/suggestion would have been to broaden the use to various image systems such as PBI (Wu, [0004] PBI provides enhanced capabilities for data integration, analysis, visualization and distribution of information within and across agencies. PBI can assimilate data from all interconnected departments' databases as well as external sources to provide actionable insight for public safety commanders, allowing for rapid, fact-based decision making).
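The frame-presence definitions recited in claim 12 admit a compact illustration: an object visible in every frame of the sequence is persistent, while an object absent from at least one frame is temporary. The sketch below assumes frames are simple sets of object identifiers; it is not Wu's background-subtraction method.

```python
# Hypothetical sketch of the claim 12 definitions over a frame sequence.
def classify(frames):
    persistent = set.intersection(*frames)       # displayed in every frame
    temporary = set.union(*frames) - persistent  # missing from >= 1 frame
    return persistent, temporary

frames = [{"tree", "car"}, {"tree"}, {"tree", "dog"}]
print(classify(frames))  # ({'tree'}, {'car', 'dog'})
```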
Regarding claim 13, Kipman in view of Lovitt and Wu discloses The system of claim 12.
On the other hand, Kipman in view of Lovitt fails to explicitly disclose but Wu discloses wherein a first temporary virtual object of the plurality of temporary virtual objects is not displayed in a different virtual environment frame of the sequence of virtual environment frames than a second temporary virtual object of the plurality of temporary virtual objects (Wu, “[0037] alternatively, background subtraction, which requires the estimation of the stationary scene background, followed by subtraction of the estimated background from the current frame can detect foreground objects (which include objects in motion). [0059] FIGS. 4A-4D show example frames of how videos analyzed may be marked automatically by the disclosed method. It shows four sample frames: a pedestrian “hanging out” (FIG. 4A), a vehicle approaching (FIG. 4B), a drug deal in progress (FIG. 4C), and leaving (FIG. 4D)”). The same motivation as for claim 12 applies here.
Regarding claim(s) 2-3, they are interpreted and rejected for the same reasons set forth in claim(s) 12-13, respectively.
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kipman et al. (US 20170236332) in view of Lovitt (US 20210405959), and further in view of Regelous et al. (US 20090187529).
Regarding claim 16, Kipman in view of Lovitt discloses The system of claim 11.
Kipman further discloses wherein the virtual environment information comprises frames corresponding to a field of view, wherein the plurality of persistent virtual objects and the plurality of temporary virtual objects are identified from objects displayed in the frames corresponding to the field of view (Kipman, “[0002] Some display devices offer a mixed-reality (MR) experience, in which real objects in a user's field of view are combined with computerized, virtual imagery. [0017] The user may access the MR mixer herein via any suitable user-interface (UI) modality. In one embodiment, a world-facing vision system includes both depth- and flat-imaging cameras. The vision system may be used to construct, in real time, world surfaces of the objects in the user's field of view (FOV). [0057] For devices that utilize world-facing cameras and an electronic display, a reduced resolution effect may be achieved by identifying the foreground object in an image captured by the world-facing camera and pixelating a portion of the image corresponding to the identified foreground object. [0058] Any suitable processing approach may be used to differentiate the foreground object from everything else in the scene (e.g., foreground/background analysis using depth information, edge detection, and machine-learning recognizers)”).
On the other hand, Kipman in view of Lovitt fails to explicitly disclose but Regelous discloses wherein the control circuitry is further configured to: assign a weight for each object based on object visibility in the frames corresponding to the field of view (Regelous, “[0032] a processor arranged for processing the weighted memory, generating an image of the environment from the perspective of the entity, recognising visible objects within the image from a list of object types, storing data about the visible objects within the weighted memory, modifying the weight of objects stored in the weighted memory depending on object visibility”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Regelous with the combination of Lovitt and Kipman to include all limitations of claim 16. That is, applying the modification of object weights depending on object visibility of Regelous to the foreground and background objects of Kipman and Lovitt, so that a weight is assigned for each persistent virtual object of the plurality of persistent virtual objects and each temporary virtual object of the plurality of temporary virtual objects based on object visibility in the frames corresponding to the field of view. The motivation/suggestion would have been to generate autonomous behaviour for graphics characters and robots using visual information from the perspective of the characters/robots (Regelous, [0001]).
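One way to picture the visibility-based weighting discussed above, as a hedged sketch rather than Regelous's weighted-memory mechanism: weight each object by the fraction of field-of-view frames in which it is visible. The linear rule and data shapes are assumptions.

```python
# Hypothetical visibility weighting over field-of-view frames.
def visibility_weights(frames):
    counts = {}
    for frame in frames:            # each frame: set of visible object names
        for name in frame:
            counts[name] = counts.get(name, 0) + 1
    n = len(frames)
    return {name: c / n for name, c in counts.items()}

print(visibility_weights([{"tree", "car"}, {"tree"}]))  # tree=1.0, car=0.5
```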
Regarding claim 6, it is interpreted and rejected for the same reasons set forth for claim 16.
Claims 9 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Kipman et al. (US 20170236332) in view of Lovitt (US 20210405959), and further in view of Abraham et al. (US 20180032818).
Regarding claim 19, Kipman in view of Lovitt discloses The system of claim 11.
On the other hand, Kipman in view of Lovitt fails to explicitly disclose but Abraham discloses when retrieving the candidate supplemental virtual object, is configured to: identify a plurality of available virtual objects; determine respective confidence weights for the plurality of available virtual objects based on how recent a respective available virtual object was displayed; and select an available virtual object of the plurality of available virtual objects based on the respective confidence weights (Abraham, “[0038] the garment data 135 can indicate garments that, together, form an outfit. In another example, the processing system 110, or a cognitive system to which the processing system 110 is communicatively linked, can be used to process images, text and/or other data retrieved from the fashion data sources 145 to generate outfit data indicating combinations of garments that are used to form outfits. [0049] the processing system 110 identify the garments being put on based on the image processing. Based on the gestures made by the user 170 while viewing the modified images 180, the processing system 110 can infer the user's sentiment toward various modified images 180, and thus infer the user's sentiment to the garments/outfits presented in the modified images 180. [0054] Accordingly, the processing system 110 can provide greater weight to more recent data and less weight to older data when selecting garment styles and garments”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Abraham with the combination of Lovitt and Kipman to include all limitations of claim 19. That is, applying Abraham's selection based on how recent the data is to the selection of the virtual object of Kipman and Lovitt. The motivation/suggestion would have been that the processing system can repeatedly learn the user's fashion tastes and improve recommendations of garments/outfits to the user 170 (Abraham, [0054]).
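For a concrete, non-authoritative reading of the recency weighting in claim 19 (by analogy to Abraham's greater weight for more recent data): exponentially decay a confidence weight by the time since each available object was last displayed, then select the highest-weighted object. The decay rule and half-life are assumptions.

```python
import math

# Hypothetical recency-weighted selection among available virtual objects.
def recency_weight(last_shown, now, half_life=60.0):
    age = max(0.0, now - last_shown)        # seconds since last displayed
    return math.exp(-math.log(2) * age / half_life)

def select_by_recency(available, now):
    # available: list of (name, last_shown_timestamp) pairs
    return max(available, key=lambda a: recency_weight(a[1], now))

print(select_by_recency([("hat", 100.0), ("sword", 40.0)], now=110.0))
```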
Regarding claim 9, it is interpreted and rejected for the same reasons set forth for claim 19.
Allowable Subject Matter
Claims 4, 5, 14, and 15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 14, it recites, wherein the control circuitry, when calculating, using the machine learning model, based on (i) the candidate supplemental virtual object, (ii) the plurality of persistent virtual objects, and (iii) the plurality of temporary virtual objects, the confidence value, is configured to: access a plurality of attributes corresponding to the candidate supplemental virtual object, the plurality of persistent virtual objects, and the plurality of temporary virtual objects; determine, for the plurality of attributes, corresponding input nodes; input, to the corresponding input nodes, the plurality of attributes; process, starting from the corresponding input nodes via one or more hidden layers, the plurality of attributes; determine, for the plurality of attributes based on the processing, an output node; and calculate, based on the output node, the confidence value.
None of the prior art of record, nor any of the prior art searched, alone or in combination, renders obvious the combination of elements recited in the claims as a whole.
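For context only, the calculation recited in claim 14 follows the shape of a conventional feed-forward pass (input nodes for the attributes, one or more hidden layers, and an output node from which the confidence value is calculated). The minimal sketch below uses random weights as a stand-in for a trained model; it is illustrative and does not reflect the application's actual disclosure.

```python
import math, random

# Hypothetical feed-forward pass: attribute inputs -> hidden layer -> output
# node -> confidence value. Random weights stand in for a trained model.
random.seed(0)

def dense(vec, weights, biases):
    return [sum(w * x for w, x in zip(row, vec)) + b
            for row, b in zip(weights, biases)]

def confidence(attributes):
    n_in, n_hid = len(attributes), 4
    W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
    W2 = [[random.uniform(-1, 1) for _ in range(n_hid)]]
    hidden = [math.tanh(h) for h in dense(attributes, W1, [0.0] * n_hid)]
    (out,) = dense(hidden, W2, [0.0])
    return 1.0 / (1.0 + math.exp(-out))     # confidence from the output node

# Numerically encoded attributes of the candidate and the scene objects.
print(confidence([0.2, 0.9, 0.1, 0.7]))
```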
Regarding claim 4, it is interpreted and indicated as containing allowable subject matter under a similar rationale as set forth above for claim 14.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRACE Q LI whose telephone number is (571)270-0497. The examiner can normally be reached Monday - Friday, 8:00 am-5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DEVONA FAULK, can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/GRACE Q LI/Primary Examiner, Art Unit 2618 12/23/2025