Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Office Action is in response to Applicant’s Amendment filed on December 23, 2025, for U.S. Patent Application No. 17/863,108, filed on July 12, 2022.
Claims 1 and 12 are independent claims; claims 1, 4-5, 8-10, 12-13, and 15 remain pending and have been examined in this application. This Action is made FINAL.
Response to Arguments
The rejections of remaining claims 1, 5, 8-10, 12-13, and 15 under 35 U.S.C. § 101 are withdrawn in view of Applicant’s December 23, 2025, claim amendments.
Applicant’s arguments in the instant Amendment, filed on December 23, 2025, with respect to the limitations listed below, have been fully considered but are not persuasive, as discussed below.
Applicant’s arguments: “In contrast, Tsibulevskiy discloses artificial neural networks or machine learning algorithms, which may be trained to recognize when an object in a figure is not recognized correctly. See, ¶0129. To be clear, there is NO request to change a value, where the change is then permitted based on any machine learning rule. Specifically, in rejecting Claim 1, the Office relies on Tsibulevskiy to disclose a request for a change to a value of an identity attribute from a user captured image. See, Office action dated Oct. 2, 2025, at p. 14-15, citing ¶¶0077, 0427, and 0443. In ¶0077, Tsibulevskiy discloses figures or files uploaded or selected by a user, or retrieved from a source identified by the user. There is no request to change the figure or file.”
The Examiner disagrees with the Applicants. The Examiner respectfully submits that Tsibulevskiy does include a request to change a figure or file. More particularly, see: Tsibulevskiy para. [0129], “machine learning algorithms, as described herein, can be trained or further trained or updated, which can be in real-time, based on user operations, as described herein … user (e.g., editor profile) may be enabled to manually correct that non-recognition or incorrect recognition or move or adjust or resize or reshape that identifier or shape or label to be non or less visually interfering. As such, the neural networks or machine learning algorithms can track those user actions, which can be in real-time, and learn from those, which can be in real-time. Then, recognition or placement algorithms (or other text or image processing algorithms) can be updated”; Tsibulevskiy para. [0168], “documents (e.g., files, images, videos, PDF files)”. That is, a user’s manual operation to correct non-recognition or incorrect recognition, or to move, adjust, resize, or reshape an identifier, shape, or label, is being interpreted as a user request to change a figure or file.
Applicant’s arguments: “At ¶0427, Tsibulevskiy discloses supervised learning, where the user can identify non-recognition or part numbers in the figure or incorrect recognition of part numbers in the figure. The user then identifies the pixel associated with the non-recognized or incorrectly recognized part numbers which are designated with a bounding box. Importantly, this is part of the training or learning for the model. This is not a request to change a value, but a designation of the specific pixels which are then "inserted into a learning model for teaching." That is, the user merely designates the pixel patterns, which are incorrect, for use in training. The user is not requesting, in any way, then, to change the non-recognition or incorrect recognition. And also, at ¶0443, Tsibulevskiy discloses the validated labels in a training set being further validated "by a corrective or non-correction by an editor profile." This is simply improving the training set for the model through validation of specific labels. There is no disclosure of a request, from a user, to change a value from an image captured by the user. As such, Tsibulevskiy is deficient.”
The Examiner disagrees with the Applicants. The Examiner respectfully submits that Tsibulevskiy does include disclosure of a request, from a user, to change a value from an image captured by the user. See Tsibulevskiy para. [0129], “For example, if a certain reference, part number, or object in a figure was not recognized at all, or if a certain reference, part number, or object in a figure was recognized and that recognition is not correct, or a certain identifier or label was presented in or over a figure to be visually interfering with other figure content, then the user (e.g., editor profile) may be enabled to manually correct that non-recognition or incorrect recognition or move or adjust or resize or reshape that identifier or shape or label to be non or less visually interfering.” That is, a user’s manual operation to correct non-recognition or incorrect recognition, or to move, adjust, resize, or reshape an identifier, shape, or label, is being interpreted as a user request to change a figure or file.
Applicant’s arguments: “Further, in rejecting Claim 1, the Office relies on Tsibulevskiy to disclose determining whether the user change is consistent with the at least one rule - generated by the machine learning model. See, Office action dated Oct. 2, 2025, at p. 14-15, citing ¶¶0182, 0084 and 0443. Initially, the rule is generated by a machine learning model, whereby citation to training data used to train a model, such as a neural network, CANNOT be consistent or inconsistent with a rule generated by the trained model. The model is not yet trained to generate the rule. What's more, manual labeling is not a determination as to whether a generated rule is consistent with the change or not. See,¶0182. And, the next time the "trained model" encounters a situation, it will not be in response to a user change, but automated through the trained model. There is NO situation in Tsibulevskiy (as cited) in which a user requests a change in a value from an image captured by the user (as explained above) and then a determination is made as to whether that requested change is consistent with a rule or not. Tsibulevskiy is deficient. And, at ¶0084, Tsibulevskiy discloses other users viewing a user's association. And at ¶0443, Tsibulevskiy discloses user labeling to be used to create training data for a model. This is not what is claimed. There is no determination of whether a user requested change is consistent or inconsistent with a machine learning generated rule. As such, Tsibulevskiy is deficient.”
The Examiner disagrees with the Applicants. The Examiner respectfully submits that Tsibulevskiy does include determination of whether a user-requested change is consistent or inconsistent with a machine learning generated rule. See Tsibulevskiy para. [0153], “Once the label or tag has reached a satisfactory location or area, then the user may take an action, such as clicking a button of a pointer or cursor device to position or anchor the label or tag. Note that this label or tag movement can also occur automatically or responsively before, during, or after label positioning on figure (e.g., when image processing algorithm determines that label or tag currently overlaps or would overlap or otherwise visually interfere with other content in figure).” The user taking “an action, such as clicking a button of a pointer or cursor device to position or anchor the label or tag” is being interpreted as a request. Relatedly, the circumstance “when image processing algorithm determines that label or tag currently overlaps or would overlap or otherwise visually interfere with other content in figure” is being interpreted as a determination of consistency or inconsistency.
Applicant’s arguments: “Also, in rejecting Claim 1, the Office relies on Tsibulevskiy to disclose a security feature of the first type of physical document coinciding with a location of the at least one identity attribute on the first type of physical document, which impedes extraction of the at least one identity attribute. See, Office action dated Oct. 2, 2025, at p. 12-13, citing ¶0129. There is no security feature disclosed in Tsibulevskiy. As cited, Tsibulevskiy merely discloses that a label or identifier may be presented over a figure, which visually interferes with the content of the figure. There is no indication that the label or the figure is a security feature of the physical document. Tsibulevskiy is deficient, and the suggested combination, which relies on Tsibulevskiy, is also deficient.”
The Examiner disagrees with the Applicants. The Examiner respectfully submits that Tsibulevskiy does include security features. For example, see the “user access permissions” of Tsibulevskiy paras. [0152]-[0153], “User access permissions may include review/comment or review only. Other user access permissions, which can be tiered, may be possible. … Some users access permissions can be …stored within the document. Sometimes, the user comments can be …tagged to or stemming from objects, …[0153] Movable … tags … if a label or tag overlaps or would overlap or otherwise visually interfere with other content of the figure (e.g., object, line, reference, identifier, shape) or other labels or tags, then that label or tag may be manually moved to avoid or minimize or reduce such overlap. For example, the label or tag may be moved by stylus or finger movement or dragged via a pointer or cursor or gesture. Once the label or tag has reached a satisfactory location or area, then the user may take an action, such as clicking a button of a pointer or cursor device to position or anchor the label or tag. Note that this label or tag movement can also occur automatically or responsively before, during, or after label positioning on figure (e.g., when image processing algorithm determines that label or tag currently overlaps or would overlap or otherwise visually interfere with other content in figure).”
The Examiner respectfully suggests that the claims be further amended, and that details from the specification be incorporated, to distinguish the claimed invention over the prior art of record. Should the Applicant desire an interview to further clarify the claim interpretation/rejections, please contact the Examiner at (571) 272-2642 to schedule an interview.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Tsibulevskiy et al. (“Tsibulevskiy”; US20220319219A1) in view of Iyer et al. (“Iyer”; US20190245693A1).
Per claim 1: Tsibulevskiy discloses a computer-implemented method for use in changing attributes associated with user identities, based on event driven rules (Tsibulevskiy para. [0034], “A determination can be made of which selective geographic regions of the figure can be performed automatically, via a preset rule (e.g., specific quadrant or grid element) or manually, as explained above. The figure can be searched via text searching, after computer vision, OCR, barcode reading, edge detection, segmentation, image segmentation, character segmentation, object detection, feature detection, or other image processing algorithms, or other algorithms”), the method comprising:
generating, by a computing device of an identity provider, using a machine learning model, a plurality of rules, based on historical data representative of approved changes to identity attributes derived from physical documents (Tsibulevskiy para. [0129], “machine learning algorithms, as described herein, can be trained or further trained or updated, which can be in real-time, based on user operations, as described herein … user (e.g., editor profile) may be enabled to manually correct that non-recognition or incorrect recognition or move or adjust or resize or reshape that identifier or shape or label to be non or less visually interfering. As such, the neural networks or machine learning algorithms can track those user actions, which can be in real-time, and learn from those, which can be in real-time. Then, recognition or placement algorithms (or other text or image processing algorithms) can be updated”; Tsibulevskiy para. [0168], “documents (e.g., files, images, videos, PDF files)”),
wherein a first one of the plurality of rules permits a change to at least one identity attribute derived from a first type of physical document, based on a first pattern in the historical data of approved changes to the at least one identity attribute due to a security feature of the first type of physical document coinciding with a location of the at least one identity attribute on the first type of physical document (Tsibulevskiy paras. [0152]-[0153], “User access permissions may include review/comment or review only. Other user access permissions, which can be tiered, may be possible. … Some users access permissions can be …stored within the document. Sometimes, the user comments can be …tagged to or stemming from objects, …[0153] Movable … tags … if a label or tag overlaps or would overlap or otherwise visually interfere with other content of the figure (e.g., object, line, reference, identifier, shape) or other labels or tags, then that label or tag may be manually moved to avoid or minimize or reduce such overlap. For example, the label or tag may be moved by stylus or finger movement or dragged via a pointer or cursor or gesture. Once the label or tag has reached a satisfactory location or area, then the user may take an action, such as clicking a button of a pointer or cursor device to position or anchor the label or tag. Note that this label or tag movement can also occur automatically or responsively before, during, or after label positioning on figure (e.g., when image processing algorithm determines that label or tag currently overlaps or would overlap or otherwise visually interfere with other content in figure).”; Tsibulevskiy para. [0129], “machine learning algorithms, as described herein, can be trained or further trained or updated, which can be in real-time, based on user operations, as described herein, relative to figures or text, which can be in real-time. 
For example, if a certain reference, part number, or object in a figure was not recognized at all, or if a certain reference, part number, or object in a figure was recognized and that recognition is not correct, or a certain identifier or label was presented in or over a figure to be visually interfering with other figure content, then the user (e.g., editor profile) may be enabled to manually correct that non-recognition or incorrect recognition or move or adjust or resize or reshape that identifier or shape or label to be non or less visually interfering. As such, the neural networks or machine learning algorithms can track those user actions, which can be in real-time, and learn from those, which can be in real-time.”), which impedes extraction of the at least one identity attribute (Tsibulevskiy para. [0129], “machine learning algorithms, as described herein, can be trained or further trained or updated, which can be in real-time, based on user operations, as described herein, relative to figures or text, which can be in real-time. For example, if a certain reference, part number, or object in a figure was not recognized at all, or if a certain reference, part number, or object in a figure was recognized and that recognition is not correct, or a certain identifier or label was presented in or over a figure to be visually interfering with other figure content, then the user (e.g., editor profile) may be enabled to manually correct that non-recognition or incorrect recognition or move or adjust or resize or reshape that identifier or shape or label to be non or less visually interfering. As such, the neural networks or machine learning algorithms can track those user actions, which can be in real-time, and learn from those, which can be in real-time.”);
receiving, at the computing device, from a mobile device of a user (Tsibulevskiy para. [0088], “figure can be obtained (e.g., retrieved, downloaded) from an image capture device (e.g., camera, smartphone, tablet, laptop”; Tsibulevskiy para. [0110], “user can use a device (e.g., smartphone, tablet, laptop, desktop, wearable, contact lens, eyeglass lens, eyeglasses, head-mounted or eyewear frame, Google glass) with an optical camera (e.g., still, video, CMOS, with wide-angle lens, with fisheye lens) to perform various processing, as disclosed herein”), a request related to a digital identity of the user, which includes i) identification information for the user captured at the mobile device from an image of a source physical document, the identification information including a value of the at least one identity attribute and ii) a user change to the value of the at least one identity attribute (Tsibulevskiy para. [0077], “the user computer 710 can access the first server 720 and request the first server 720 to perform the visual association on a selected figure or a file (e.g., uploaded by or selected by the user or retrieved from a data source or file sharing service pointed or identified by the user). The first server 720 then accesses or downloads the selected figure or the file and performs the visual association thereon or on a copy thereof or an extracted image therefrom”; Tsibulevskiy para. [0427], “Some embodiments may include supervised learning where non-recognition of part numbers in figures or incorrect recognition of part numbers in figures can be identified by the user upon figure sheet review. 
Then, the user can instruct to have a pixel pattern (e.g., part number) enclosed within an overlaid visual marker or green (or another color to indicate presence of part number in text) bounding box to be associated with a set of pixel coordinates, whether new or different from original, (e.g., moving via dragging visual marker or bounding box) and inserted into a learning model for training, which can teach what something is or what something is not, which can learn from corrections or deletions or additions”; Tsibulevskiy para. [0443], “correspondence (e.g., validation), which can include validation by an editor profile, whether by a corrective action or a non-correction, which can add more weight to that correspondence, then the pixel coordinates, the values, and the character patterns form a set of labeled data, which can be used to train or enhance training or supplement training of an OCR engine or supplement an OCR engine, which can be supervised or unsupervised”; Tsibulevskiy para. [0182], “an image capture device (e.g., camera, smartphone, tablet, laptop, desktop, unmanned land or aerial or marine vehicle, webcam, eyewear unit, scanner) and matched with the text (e.g., based on figure identifier, document identifier”; Tsibulevskiy para. [0360], “matching can include matching a human face (first content item) in a photo to a plurality of human faces (second content items) in a plurality of photos based on facial similarity therebetween”; Tsibulevskiy para. [0360], “matching can include matching a human body part or bone or organ (first content item) in a medical imaging scan image to a plurality of human body parts or bones or organs (second content item) in a plurality of medical imaging scan images based on human body part or bone or organ similarity therebetween”);
retrieving, by the computing device, the first one of the plurality of rules based on the user change being directed to the at least one identity attribute and/or the source physical document being the first type of physical document (Tsibulevskiy para. [0243], “the machine learning algorithm or the neural network algorithm can actively learn in real-time (or not real-time) from such activities and update in real-time (or not real-time) its relevant machine learning or neural network data (e.g., character recognition models, part number recognition models, image segmentation model, object detection model, pattern or character avoidance or skipping model, input-output example pairs) for subsequent or future recognitions, mappings, labeling, or other activities relative to figures or text”; Tsibulevskiy, para. [0034], “A determination can be made of which selective geographic regions of the figure can be performed automatically, via a preset rule (e.g., specific quadrant or grid element) or manually, as explained above. The figure can be searched via text searching, after computer vision, OCR, barcode reading, edge detection, segmentation, image segmentation, character segmentation, object detection, feature detection, or other image processing algorithms, or other algorithms have run on that figure”);
determining, by the computing device, that the user change to the value of the at least one identity attribute is consistent with the first one of the plurality of rules (Tsibulevskiy para. [0182], “this part number can be manually labeled or corrected so that a neural network or machine learning algorithm, as described herein, can learn of this manual labeling or correction and then recognize this situation next time and correctly recognize this part number or format or similar use, whether for same or different part number”; Tsibulevskiy para. [0084], “In a computer network environment, one user can perform a visual association process on a figure (or copy thereof), as described herein, such that the visually associated figure is then stored in the database and other users can be granted read access to the visually associated figure. Thus, other users can avoid repetition of the visual association process in order to improve efficiency and save computational resources”; Tsibulevskiy para. [0443], “correspondence (e.g., validation), which can include validation by an editor profile, whether by a corrective action or a non-correction, which can add more weight to that correspondence, then the pixel coordinates, the values, and the character patterns form a set of labeled data, which can be used to train or enhance training or supplement training of an OCR engine or supplement an OCR engine, which can be supervised or unsupervised”); and
based on the user change to the value of the at least one identity attribute being consistent with the first one of the plurality of rules (Tsibulevskiy paras. [0359]-[0360], “the second content items are searched in order to determine if any of the second content items match the first content item, as described herein. [0360] This form of matching can be based on content item similarity (or dissimilarity) based on various content attributes or data container attributes. For example, this form of matching can include matching a human face (first content item) in a photo to a plurality of human faces (second content items) in a plurality of photos based on facial similarity therebetween.”):
effecting, by the computing device, the user change to the value of the at least one identity attribute (Tsibulevskiy para. [0363], “The first human face, the second human face, and the third human face can be matched to each other based on similarity (or dissimilarity) and a corresponding mapping or index between the first human face, the second human face, and the third human face can be formed, as described herein, (or the first image, the second image, and the third image can be tagged, related, or associated with metadata informative of such matching content, as described herein).”); and
storing …the changed value of the at least one identity attribute as part of the digital identity for the user (Tsibulevskiy para. [0182], “this part number can be manually labeled or corrected so that a neural network or machine learning algorithm, as described herein, can learn of this manual labeling or correction and then recognize this situation next time and correctly recognize this part number or format or similar use, whether for same or different part number”; Tsibulevskiy para. [0443], “correspondence (e.g., validation), which can include validation by an editor profile, whether by a corrective action or a non-correction, which can add more weight to that correspondence, then the pixel coordinates, the values, and the character patterns form a set of labeled data, which can be used to train or enhance training or supplement training of an OCR engine or supplement an OCR engine, which can be supervised or unsupervised”; Tsibulevskiy para. [0363], “The corresponding mapping or index can be stored internal or external to the social networking service user profile.”).
Tsibulevskiy does not disclose the underlined features of “storing, in a blockchain data structure of the identity provider, the changed value of the at least one identity attribute as part of the digital identity for the user in the blockchain data structure”.
However, in an analogous art, Iyer discloses an arrangement storing data in a blockchain data structure of an identity provider (Iyer para. [0016], “the IDP 102 is configured to then compile the digital identity for the user 114 and to store the digital identity in the ledger data structure 110 associated with the IDP 102. As such, the ledger data structure 110 includes the user's digital identity and other digital identities for other users, and corresponding certification records therefore (together or separately). In this exemplary embodiment, the ledger data structure 110 includes a block chain data structure”).
It would have been prima facie obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify Tsibulevskiy to include, as taught by Iyer, an arrangement storing data in a blockchain data structure of an identity provider. The motivation for the modification would have been to include the well-known use of an arrangement allowing the combination to store data in a blockchain, in order to increase the versatility and attractiveness, and broaden the adoption, of the Tsibulevskiy/Iyer combination within the security field.
Per claim 4: The Tsibulevskiy/Iyer combination discloses the method of claim 1. Tsibulevskiy further discloses an arrangement:
capturing, by the mobile device, the image of the source physical document (Tsibulevskiy para. [0182], “an image capture device (e.g., camera, smartphone, tablet, laptop, desktop, unmanned land or aerial or marine vehicle, webcam, eyewear unit, scanner) and matched with the text (e.g., based on figure identifier, document identifier”);
extracting the value of the at least one identity attribute from the image of the source physical document (Tsibulevskiy para. [0182], “this part number can be manually labeled or corrected so that a neural network or machine learning algorithm, as described herein, can learn of this manual labeling or correction and then recognize this situation next time and correctly recognize this part number or format or similar use, whether for same or different part number”; Tsibulevskiy para. [0443], “correspondence (e.g., validation), which can include validation by an editor profile, whether by a corrective action or a non-correction, which can add more weight to that correspondence, then the pixel coordinates, the values, and the character patterns form a set of labeled data, which can be used to train or enhance training or supplement training of an OCR engine or supplement an OCR engine, which can be supervised or unsupervised”);
receiving, at the mobile device, from the user, the user change to the value of the at least one attribute (Tsibulevskiy para. [0443], “correspondence (e.g., validation), which can include validation by an editor profile, whether by a corrective action or a non-correction, which can add more weight to that correspondence, then the pixel coordinates, the values, and the character patterns form a set of labeled data, which can be used to train or enhance training or supplement training of an OCR engine or supplement an OCR engine, which can be supervised or unsupervised”) and
transmitting, by the mobile device, the user change to the value of the at least one identity attribute to the computing device (Tsibulevskiy para. [0072], “can be embodied as or included in any type of a computer (e.g., desktop, laptop, mainframe, cloud-computing system, cluster computing system, server cluster, smartphone (e.g., bendable, flexible, folding, rigid), tablet (e.g., bendable, flexible, folding, rigid)”; Tsibulevskiy para. [0443], “correspondence (e.g., validation), which can include validation by an editor profile, whether by a corrective action or a non-correction, which can add more weight to that correspondence, then the pixel coordinates, the values, and the character patterns form a set of labeled data, which can be used to train or enhance training or supplement training of an OCR engine or supplement an OCR engine, which can be supervised or unsupervised”).
Per claim 12: Tsibulevskiy discloses a system for use in changing attributes associated with user identities (Tsibulevskiy para. [0034], “A determination can be made of which selective geographic regions of the figure can be performed automatically, via a preset rule (e.g., specific quadrant or grid element) or manually, as explained above. The figure can be searched via text searching, after computer vision, OCR, barcode reading, edge detection, segmentation, image segmentation, character segmentation, object detection, feature detection, or other image processing algorithms, or other algorithms”), the system comprising:
an identity provider including a computing device, which includes a non-transitory memory, the computing device configured, by executable instructions included in the non-transitory memory (Tsibulevskiy para. [0448], “Various embodiments of the present disclosure may be implemented in a data processing system suitable for storing and/or executing program code that includes at least one processor coupled directly or indirectly to memory elements through a system bus.”; Tsibulevskiy para. [0450], “examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory)”) to:
generate, using a machine learning model, a plurality of rules, based on historical data representative of approved changes to identity attributes derived from physical documents (Tsibulevskiy para. [0129], “machine learning algorithms, as described herein, can be trained or further trained or updated, which can be in real-time, based on user operations, as described herein … user (e.g., editor profile) may be enabled to manually correct that non-recognition or incorrect recognition or move or adjust or resize or reshape that identifier or shape or label to be non or less visually interfering. As such, the neural networks or machine learning algorithms can track those user actions, which can be in real-time, and learn from those, which can be in real-time. Then, recognition or placement algorithms (or other text or image processing algorithms) can be updated”; Tsibulevskiy para. [0168], “documents (e.g., files, images, videos, PDF files)”),
wherein a first one of the plurality of rules permits a change to at least one identity attribute derived from a first type of physical document, based on a first pattern in the historical data of approved changes to the at least one identity attribute due to a security feature of the first type of physical document coinciding with a location of the at least one identity attribute on the first type of physical document (Tsibulevskiy paras. [0152]-[0153], “User access permissions may include review/comment or review only. Other user access permissions, which can be tiered, may be possible. … Some users access permissions can be …stored within the document. Sometimes, the user comments can be …tagged to or stemming from objects, …[0153] Movable … tags … if a label or tag overlaps or would overlap or otherwise visually interfere with other content of the figure (e.g., object, line, reference, identifier, shape) or other labels or tags, then that label or tag may be manually moved to avoid or minimize or reduce such overlap. For example, the label or tag may be moved by stylus or finger movement or dragged via a pointer or cursor or gesture. Once the label or tag has reached a satisfactory location or area, then the user may take an action, such as clicking a button of a pointer or cursor device to position or anchor the label or tag. Note that this label or tag movement can also occur automatically or responsively before, during, or after label positioning on figure (e.g., when image processing algorithm determines that label or tag currently overlaps or would overlap or otherwise visually interfere with other content in figure).”; Tsibulevskiy para. [0129], “machine learning algorithms, as described herein, can be trained or further trained or updated, which can be in real-time, based on user operations, as described herein, relative to figures or text, which can be in real-time. 
For example, if a certain reference, part number, or object in a figure was not recognized at all, or if a certain reference, part number, or object in a figure was recognized and that recognition is not correct, or a certain identifier or label was presented in or over a figure to be visually interfering with other figure content, then the user (e.g., editor profile) may be enabled to manually correct that non-recognition or incorrect recognition or move or adjust or resize or reshape that identifier or shape or label to be non or less visually interfering. As such, the neural networks or machine learning algorithms can track those user actions, which can be in real-time, and learn from those, which can be in real-time.”), which impedes extraction of the at least one identity attribute (Tsibulevskiy para. [0129], “machine learning algorithms, as described herein, can be trained or further trained or updated, which can be in real-time, based on user operations, as described herein, relative to figures or text, which can be in real-time. For example, if a certain reference, part number, or object in a figure was not recognized at all, or if a certain reference, part number, or object in a figure was recognized and that recognition is not correct, or a certain identifier or label was presented in or over a figure to be visually interfering with other figure content, then the user (e.g., editor profile) may be enabled to manually correct that non-recognition or incorrect recognition or move or adjust or resize or reshape that identifier or shape or label to be non or less visually interfering. As such, the neural networks or machine learning algorithms can track those user actions, which can be in real-time, and learn from those, which can be in real-time.”);
receive, from a mobile device of a user (Tsibulevskiy para. [0088], “figure can be obtained (e.g., retrieved, downloaded) from an image capture device (e.g., camera, smartphone, tablet, laptop”; Tsibulevskiy para. [0110], “user can use a device (e.g., smartphone, tablet, laptop, desktop, wearable, contact lens, eyeglass lens, eyeglasses, head-mounted or eyewear frame, Google glass) with an optical camera (e.g., still, video, CMOS, with wide-angle lens, with fisheye lens) to perform various processing, as disclosed herein”), a request related to a digital identity of the user, which includes i) identification information for the user captured at the mobile device from an image of a source physical document, the identification information including a value of the at least one identity attribute and ii) a user change to the value of the at least one identity attribute (Tsibulevskiy para. [0077], “the user computer 710 can access the first server 720 and request the first server 720 to perform the visual association on a selected figure or a file (e.g., uploaded by or selected by the user or retrieved from a data source or file sharing service pointed or identified by the user). The first server 720 then accesses or downloads the selected figure or the file and performs the visual association thereon or on a copy thereof or an extracted image therefrom”; Tsibulevskiy para. [0427], “Some embodiments may include supervised learning where non-recognition of part numbers in figures or incorrect recognition of part numbers in figures can be identified by the user upon figure sheet review. 
Then, the user can instruct to have a pixel pattern (e.g., part number) enclosed within an overlaid visual marker or green (or another color to indicate presence of part number in text) bounding box to be associated with a set of pixel coordinates, whether new or different from original, (e.g., moving via dragging visual marker or bounding box) and inserted into a learning model for training, which can teach what something is or what something is not, which can learn from corrections or deletions or additions”; Tsibulevskiy para. [0443], “correspondence (e.g., validation), which can include validation by an editor profile, whether by a corrective action or a non-correction, which can add more weight to that correspondence, then the pixel coordinates, the values, and the character patterns form a set of labeled data, which can be used to train or enhance training or supplement training of an OCR engine or supplement an OCR engine, which can be supervised or unsupervised”; Tsibulevskiy para. [0182], “an image capture device (e.g., camera, smartphone, tablet, laptop, desktop, unmanned land or aerial or marine vehicle, webcam, eyewear unit, scanner) and matched with the text (e.g., based on figure identifier, document identifier”; Tsibulevskiy para. [0360], “matching can include matching a human face (first content item) in a photo to a plurality of human faces (second content items) in a plurality of photos based on facial similarity therebetween”; Tsibulevskiy para. [0360], “matching can include matching a human body part or bone or organ (first content item) in a medical imaging scan image to a plurality of human body parts or bones or organs (second content item) in a plurality of medical imaging scan images based on human body part or bone or organ similarity therebetween”);
retrieve the first one of the plurality of rules based on the user change being directed to the at least one identity attribute and/or the source physical document being the first type of physical document (Tsibulevskiy para. [0243], “the machine learning algorithm or the neural network algorithm can actively learn in real-time (or not real-time) from such activities and update in real-time (or not real-time) its relevant machine learning or neural network data (e.g., character recognition models, part number recognition models, image segmentation model, object detection model, pattern or character avoidance or skipping model, input-output example pairs) for subsequent or future recognitions, mappings, labeling, or other activities relative to figures or text”; Tsibulevskiy, para. [0034], “A determination can be made of which selective geographic regions of the figure can be performed automatically, via a preset rule (e.g., specific quadrant or grid element) or manually, as explained above. The figure can be searched via text searching, after computer vision, OCR, barcode reading, edge detection, segmentation, image segmentation, character segmentation, object detection, feature detection, or other image processing algorithms, or other algorithms have run on that figure”);
determine whether the user change to the value of the at least one identity attribute is consistent with the first one of the plurality of rules (Tsibulevskiy para. [0182], “this part number can be manually labeled or corrected so that a neural network or machine learning algorithm, as described herein, can learn of this manual labeling or correction and then recognize this situation next time and correctly recognize this part number or format or similar use, whether for same or different part number”; Tsibulevskiy para. [0084], “In a computer network environment, one user can perform a visual association process on a figure (or copy thereof), as described herein, such that the visually associated figure is then stored in the database and other users can be granted read access to the visually associated figure. Thus, other users can avoid repetition of the visual association process in order to improve efficiency and save computational resources”; Tsibulevskiy para. [0443], “correspondence (e.g., validation), which can include validation by an editor profile, whether by a corrective action or a non-correction, which can add more weight to that correspondence, then the pixel coordinates, the values, and the character patterns form a set of labeled data, which can be used to train or enhance training or supplement training of an OCR engine or supplement an OCR engine, which can be supervised or unsupervised”); and
based on the user change being consistent with the first one of the plurality of rules: (Tsibulevskiy paras. [0359]-[0360], “the second content items are searched in order to determine if any of the second content items match the first content item, as described herein. [0360] This form of matching can be based on content item similarity (or dissimilarity) based on various content attributes or data container attributes. For example, this form of matching can include matching a human face (first content item) in a photo to a plurality of human faces (second content items) in a plurality of photos based on facial similarity therebetween.”),
effect the user change to the value of the at least one identity attribute (Tsibulevskiy para. [0363], “The first human face, the second human face, and the third human face can be matched to each other based on similarity (or dissimilarity) and a corresponding mapping or index between the first human face, the second human face, and the third human face can be formed, as described herein, (or the first image, the second image, and the third image can be tagged, related, or associated with metadata informative of such matching content, as described herein).”); and
store the changed value of the at least one identity attribute as part of the digital identity for the user (Tsibulevskiy para. [0182], “this part number can be manually labeled or corrected so that a neural network or machine learning algorithm, as described herein, can learn of this manual labeling or correction and then recognize this situation next time and correctly recognize this part number or format or similar use, whether for same or different part number”; Tsibulevskiy para. [0443], “correspondence (e.g., validation), which can include validation by an editor profile, whether by a corrective action or a non-correction, which can add more weight to that correspondence, then the pixel coordinates, the values, and the character patterns form a set of labeled data, which can be used to train or enhance training or supplement training of an OCR engine or supplement an OCR engine, which can be supervised or unsupervised”).
Tsibulevskiy does not disclose the underlined features of “store, in a blockchain data structure of the identity provider, the changed value of the at least one identity attribute as part of the digital identity for the user in the blockchain data structure”.
However, in an analogous art, Iyer discloses an arrangement to store data in a blockchain data structure of an identity provider (Iyer para. [0016], “the IDP 102 is configured to then compile the digital identity for the user 114 and to store the digital identity in the ledger data structure 110 associated with the IDP 102. As such, the ledger data structure 110 includes the user's digital identity and other digital identities for other users, and corresponding certification records therefore (together or separately). In this exemplary embodiment, the ledger data structure 110 includes a block chain data structure”).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify Tsibulevskiy to include, as taught by Iyer, an arrangement to store data in a blockchain data structure of an identity provider. The motivation for modifying would have been to include the well-known use of an arrangement allowing the combination to store data in a blockchain, in order to increase the versatility and attractiveness of, and broaden the adoption of, the Tsibulevskiy/Iyer combination within the security field.
Claims 5, 8-10, 13 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Tsibulevskiy et al. (“Tsibulevskiy”; US20220319219A1) in view of Iyer et al. (“Iyer”; US20190245693A1) and Potash et al. (“Potash”; US20150095987A1).
Per claim 5: The Tsibulevskiy/Iyer combination discloses the method of claim 1. The Tsibulevskiy/Iyer combination does not disclose an arrangement wherein the identification information includes an authentication result, based on comparison of the image of the source physical document and an image of the user captured by the mobile device.
However, in an analogous art, Potash discloses an arrangement wherein the identification information includes an authentication of the user, based on an image of a physical document and an image of the user captured by the mobile device (Potash para. [0065], “image of a fingerprint on a driver's license, or a photograph in a passport”).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the Tsibulevskiy/Iyer combination to include, as taught by Potash, an arrangement wherein the identification information includes an authentication of the user, based on an image of a physical document and an image of the user captured by the mobile device. The motivation for modifying would have been to include the well-known use of physical document and user images for authentication, in order to increase the versatility and attractiveness of, and broaden the adoption of, the Tsibulevskiy/Iyer/Potash combination within the security field.
Per claim 8: The Tsibulevskiy/Iyer combination discloses the method of claim 1. The Tsibulevskiy/Iyer combination does not disclose an arrangement wherein determining whether the change to the value of the at least one identity attribute is consistent with the first one of the plurality of rules includes: generating at least one score for the change to the value of the at least one identity attribute, based at least in part on the first one of the plurality of rules; and comparing the at least one score for the change to the value to a defined threshold; and wherein effecting the change to the at least one identity attribute includes effecting the change in response to the at least one score satisfying the defined threshold.
However, in an analogous art, Potash discloses an arrangement wherein determining whether the change to the value of the at least one identity attribute is consistent with the first one of the plurality of rules includes:
generating at least one score for the change to the value of the at least one identity attribute, based at least in part on the first one of the plurality of rules (Potash para. [0023], “The service authorization threshold reflects a level of verification required for access to the service. The service authorization threshold can be determined by a provider of service 104, which can be different than an operator of verification unit 106. When the base verification score meets the service authorization threshold, access to the service can be granted (operation 208).”); and
comparing the at least one score for the change to the value to a defined threshold (Potash Abstract, “the base verification score is compared with a service authorization threshold associated with the service.”); and
wherein effecting the change to the at least one identity attribute includes effecting the change in response to the at least one score satisfying the defined threshold (Potash Abstract, “When the base verification score meets the service authorization threshold, access is granted to the service.”).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the Tsibulevskiy/Iyer combination to include, as taught by Potash, an arrangement wherein determining whether the change to the value of the at least one identity attribute is consistent with the first one of the plurality of rules includes: generating at least one score for the change to the value of the at least one identity attribute, based at least in part on the first one of the plurality of rules; and comparing the at least one score for the change to the value to a defined threshold; and wherein effecting the change to the at least one identity attribute includes effecting the change in response to the at least one score satisfying the defined threshold. The motivation for modifying would have been to include the well-known use of an arrangement allowing a user to easily change attributes when obeying a rule based on determination of a score and a threshold, in order to increase the versatility and attractiveness of, and broaden the adoption of, the Tsibulevskiy/Iyer/Potash combination within the security field.
Per claim 9: The Tsibulevskiy/Iyer/Potash combination discloses the method of claim 8. Potash further discloses an arrangement wherein generating the at least one score includes combining a threat score for the change to the value of the at least one identity attribute (Potash para. [0083], “the difference between the base and session verification scores may indicate a defect with one or more identification features, or an attempt at identity fraud”) and at least one score relating to a circumstance associated with the at least one change (Potash para. [0086], “The session verification score can be compared to the base verification score, and to an authorization threshold”; [Note: The “authorization threshold” is interpreted as the claimed “circumstance”]).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the Tsibulevskiy/Iyer/Potash combination to include, as further taught by Potash, an arrangement wherein generating the at least one score includes combining a threat score for the change to the value of the at least one identity attribute and at least one score relating to a circumstance associated with the at least one change. The motivation for modifying would have been to include the well-known use of an arrangement allowing a user to easily change attributes when obeying a rule based on determination of scores for two situations of interest, in order to increase the versatility and attractiveness of, and broaden the adoption of, the Tsibulevskiy/Iyer/Potash combination within the security field.
Per claim 10: The Tsibulevskiy/Iyer/Potash combination discloses the method of claim 9. Potash further discloses an arrangement wherein the at least one score relating to the circumstance associated with the at least one change is selected from a group consisting of a mitigation score and an escalation score (Potash para. [0092], “Different service providers may determine different authorization thresholds required for access to a service. For example, a bank may require a higher level of verification, and concomitantly may impose a higher authorization threshold, than a gym.”).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to further modify the Tsibulevskiy/Iyer/Potash combination to include, as taught by Potash, an arrangement wherein the at least one score relating to the circumstance associated with the at least one change is selected from a group consisting of a mitigation score and an escalation score. The motivation for modifying would have been to include the well-known use of an arrangement allowing a user to easily change attributes when obeying a rule considering a mitigation score and/or an escalation score, in order to increase the versatility and attractiveness of, and broaden the adoption of, the Tsibulevskiy/Iyer/Potash combination within the security field.
Per claim 13: The Tsibulevskiy/Iyer combination discloses the system of claim 12. The Tsibulevskiy/Iyer combination does not disclose an arrangement wherein the source includes a physical document.
However, in an analogous art, Potash discloses an arrangement wherein the source includes a physical document (Potash para. [0024], “a document or identification number from a document, such as a driver's license, social security number, a passport”).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the Tsibulevskiy/Iyer combination to include, as taught by Potash, an arrangement wherein the source includes a physical document. The motivation for modifying would have been to include the well-known use of a physical document for authentication, in order to increase the versatility and attractiveness of, and broaden the adoption of, the Tsibulevskiy/Iyer/Potash combination within the security field.
Per claim 15: The Tsibulevskiy/Iyer combination discloses the system of claim 12. The Tsibulevskiy/Iyer combination does not disclose an arrangement wherein the computing device, in determining whether the change to the value of the at least one identity attribute is consistent with the first one of the plurality of rules, is configured to: generate at least one score for the change to the value of the at least one identity attribute, based at least in part on the first one of the plurality of rules; and compare the at least one score for the change to the value to a defined threshold; and wherein the computing device, in effecting the change to the value of the at least one identity attribute, is configured to effect the change to the value in response to the at least one score satisfying the defined threshold.
However, in an analogous art, Potash discloses an arrangement wherein the computing device, in determining whether the change to the value of the at least one identity attribute is consistent with the first one of the plurality of rules, is configured to:
generate at least one score for the change to the value of the at least one identity attribute, based at least in part on the first one of the plurality of rules (Potash para. [0023], “The service authorization threshold reflects a level of verification required for access to the service. The service authorization threshold can be determined by a provider of service 104, which can be different than an operator of verification unit 106. When the base verification score meets the service authorization threshold, access to the service can be granted (operation 208).”); and
compare the at least one score for the change to the value to a defined threshold (Potash Abstract, “the base verification score is compared with a service authorization threshold associated with the service.”); and
wherein the computing device, in effecting the change to the value of the at least one identity attribute, is configured to effect the change to the value in response to the at least one score satisfying the defined threshold (Potash Abstract, “When the base verification score meets the service authorization threshold, access is granted to the service.”).
It would have been prima facie obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, to modify the Tsibulevskiy/Iyer combination to include, as taught by Potash, an arrangement wherein the computing device, in determining whether the change to the value of the at least one identity attribute is consistent with the first one of the plurality of rules, is configured to: generate at least one score for the change to the value of the at least one identity attribute, based at least in part on the first one of the plurality of rules; and compare the at least one score for the change to the value to a defined threshold; and wherein the computing device, in effecting the change to the value of the at least one identity attribute, is configured to effect the change to the value in response to the at least one score satisfying the defined threshold. The motivation for modifying would have been to include the well-known use of an arrangement allowing a user to easily change attributes when obeying a rule based on determination of a score and a threshold, in order to increase the versatility and attractiveness of, and broaden the adoption of, the Tsibulevskiy/Iyer/Potash combination within the security field.
Conclusion
Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Paul J Skwierawski whose telephone number is (571) 272-2642. The examiner can normally be reached 6:00am-3:30pm weekdays.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisory primary examiner (SPE) Luu Pham can be reached on (571) 270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Paul Skwierawski/
Patent Examiner, Art Unit 2439
/LUU T PHAM/Supervisory Patent Examiner, Art Unit 2439