DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 04/14/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Response to Arguments
In regards to Argument(s), Applicant(s) state(s) that Botten does not teach or suggest "validating the human-readable text" (captured via OCR or otherwise) against a database. Botten decodes machine-readable barcode data and queries a database based on that decoded data, not based on captured human-readable text. The human-readable text in Botten is manually compared by the user (see paragraph [0034]: "manually compare the information ... with human-readable information appearing on a label"). In Botten, there is no electronic capture or automated validation of the human-readable text. For all these reasons, the Applicant submits that the rejection of claim 1 is improper and should be withdrawn, and that claim 1 is allowable over the cited references. (Emphasis added; Remarks, pages 8-9.)
Applicant’s arguments have been considered but are moot in view of the new ground(s) of rejection over Greyshock (US 2008/0300794 A1) in view of Tripuraneni et al (US 2021/0192202 A1).
Office Action Summary
Claim(s) 1-4, 9-11, 13-15, 21-25, 27-28, and 30 is/are rejected under 35 U.S.C. 103 as being unpatentable over Greyshock (US 2008/0300794 A1) in view of Tripuraneni et al (US 2021/0192202 A1).
Claim(s) 5-7, 12, 16-19, 26, 29, and 31-33 is/are rejected under 35 U.S.C. 103 as being unpatentable over Greyshock (US 2008/0300794 A1) in view of Tripuraneni et al (US 2021/0192202 A1), further in view of Gholami et al (US 2022/0068501 A1).
Claim(s) 8 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Greyshock (US 2008/0300794 A1) in view of Tripuraneni et al (US 2021/0192202 A1), further in view of Holmes et al (US 2011/0054668 A1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 1-4, 9-11, 13-15, 21-25, 27-28, and 30 is/are rejected under 35 U.S.C. 103 as being unpatentable over Greyshock (US 2008/0300794 A1) in view of Tripuraneni et al (US 2021/0192202 A1).
Regarding claim(s) 1, Greyshock teaches a method of capturing human-readable text displayed on a container and verifying against a known data set using a device, said method comprising the device:
capturing identification information associated with the container (Paragraph [0019]: “(1) capturing identification information associated with a unit dose package”);
electronically capturing the human-readable text on the container (Paragraph [0019]: “(2) determining, based at least in part on the identification information, a location at which human-readable text is displayed on the unit dose package; and (3) electronically capturing the human-readable text at the determined location”).
Greyshock fails to teach validating the human-readable text on the container against a verified database of valid values, linked to the identification information. However, Tripuraneni teaches validating the human-readable text on the container against a verified database of valid values, linked to the identification information (Paragraph [0003]: “obtain, based on the recognized text, validation data, the validation data including verification text; determine whether the recognized text is verified based on the verification text”; Paragraph [0012]: “the text recognition platform may obtain validation data based on a portion of the recognized text and compare portions of the validation data to portions of the recognized text, e.g., in a manner designed to verify that the recognized text is accurate”; Paragraph [0018]: “obtain validation data, e.g., from the validation data storage device […] By comparing the validation data to the recognized text, the text recognition platform may determine whether the recognized text is accurate”; and Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Greyshock and Tripuraneni before the effective filing date of the claimed invention. The motivation for this combination of references would have been the predictable improvement of integrating the validation routine of Tripuraneni into Greyshock's container-identification system to enhance reliability and reduce OCR misreads, since both references address automated text recognition and database correlation in the same field. The combination merely adds a known verification function to an existing architecture without changing its basic operation. This motivation for the combination of Greyshock and Tripuraneni is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141(III).
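For illustration only (this sketch is not part of the record, and all function names, data structures, and sample values in it are hypothetical rather than drawn from either reference), the combined capture-and-validate workflow mapped above, capturing identification information, electronically capturing the human-readable text, and validating that text against valid values linked to the identification information, might be sketched as:

```python
# Hypothetical sketch of the claimed workflow; names and data are illustrative only.

def capture_identification(container):
    """Step (1): capture identification information associated with the container
    (cf. Greyshock, Paragraph [0019])."""
    return container["id_code"]

def capture_label_text(container):
    """Steps (2)-(3): electronically capture the human-readable text displayed
    on the container (cf. Greyshock, Paragraph [0019])."""
    return container["label_text"]

# Hypothetical verified database of valid values, linked to (keyed by) the
# identification information (cf. Tripuraneni, Paragraphs [0012], [0018]).
VERIFIED_DATABASE = {
    "ID-0001": {"LOT A123 EXP 2026-01"},
}

def validate_container(container):
    """Validate the captured human-readable text against the valid values
    linked to the captured identification information."""
    id_code = capture_identification(container)
    text = capture_label_text(container)
    return text in VERIFIED_DATABASE.get(id_code, set())
```

Under these assumptions, a container whose captured text appears among the valid values linked to its identification code validates; any other captured text does not.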
Regarding claim(s) 2, Greyshock as modified by Tripuraneni teaches the method of claim 1, where Greyshock teaches wherein capturing the identification information comprises reading an identification code displayed on the container (Paragraph [0020]: “In one exemplary embodiment, capturing identification information (e.g., a medication type or a manufacturer associated with the unit dose package) involves reading an identification code displayed on the unit dose package”).
Regarding claim(s) 3, Greyshock as modified by Tripuraneni teaches the method of claim 2, where Greyshock teaches wherein reading the identification code comprises using optical character recognition (OCR) to read a package label on the container (Paragraph [0077]: “Once the location and, in many instances, the format of the human-readable text is determined, according to one exemplary embodiment, the human-readable text may then be electronically captured and translated into machine-readable text using, for example, optical character recognition (OCR)”).
Regarding claim(s) 4, Greyshock as modified by Tripuraneni teaches the method of claim 1, where Greyshock teaches wherein capturing the identification information comprises scanning a machine-readable code on the container (Paragraph [0009]: “Patient-specific containers (e.g., drawers or bins) displaying barcodes that include the corresponding patient's unique identification code are placed on a conveyer belt associated with the automated system”).
Regarding claim(s) 9, Greyshock as modified by Tripuraneni teaches the method of claim 1, where Tripuraneni teaches wherein, after validating the human-readable text on the container, providing visual feedback to a human operator or electronic feedback to a machine that the container has been successfully identified and the human-readable text has been verified (Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”).
Regarding claim(s) 10, Greyshock as modified by Tripuraneni teaches the method of claim 1, where Greyshock teaches wherein the human-readable text comprises at least one of an expiration date and a lot number associated with the container (Paragraph [0070]: “In addition to reading the identification code displayed on the unit dose blister, it is also often desirable to be able to electronically capture information conveyed in various types of human-readable text or codes that are similarly displayed on the unit dose blister, such as an expiration date or lot number associated with the medication”).
Regarding claim(s) 11, Greyshock as modified by Tripuraneni teaches the method of claim 1, where Greyshock teaches wherein electronically capturing the human-readable text comprises translating the human-readable text into machine-readable text using optical character recognition (Paragraph [0079]: “According to exemplary embodiments of the present invention, any of the foregoing procedures (i.e., determining the location and/or format of human-readable text, scanning a unit dose blister at the determined location to capture an image of the human-readable text, or translating the human-readable text using OCR) may be implemented by a processor, such as the controller discussed below, operating under the control of software, which controls the reader, the scanner, and OCR”);
Where Tripuraneni teaches each valid value in the verified database corresponds to machine-readable text (Paragraph [0018]: “obtain validation data, e.g., from the validation data storage device […] By comparing the validation data to the recognized text, the text recognition platform may determine whether the recognized text is accurate”; and Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”); and
validating the human-readable text on the container comprises confirming that (i) the machine-readable text corresponding to the human-readable text on the container matches (ii) the machine-readable text corresponding to a valid value in the verified database (Paragraph [0012]: “the text recognition platform may obtain validation data based on a portion of the recognized text and compare portions of the validation data to portions of the recognized text, e.g., in a manner designed to verify that the recognized text is accurate”; Paragraph [0018]: “obtain validation data, e.g., from the validation data storage device […] By comparing the validation data to the recognized text, the text recognition platform may determine whether the recognized text is accurate”; and Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”).
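For illustration only (not part of the record; the normalization scheme and sample values below are hypothetical), the matching step of claim 11, confirming that the machine-readable text corresponding to the captured human-readable text matches the machine-readable text corresponding to a valid value in the verified database, might be sketched as:

```python
# Hypothetical sketch of the claim 11 matching step; names and data are illustrative.

def to_machine_readable(text):
    """Canonicalize captured or stored text into a machine-readable form
    (uppercase alphanumerics only) so the comparison is format-insensitive."""
    return "".join(ch for ch in text.upper() if ch.isalnum())

def matches_valid_value(ocr_text, valid_values):
    """Confirm that (i) the machine-readable text corresponding to the captured
    human-readable text matches (ii) the machine-readable text corresponding to
    some valid value in the verified database."""
    captured = to_machine_readable(ocr_text)
    return any(captured == to_machine_readable(v) for v in valid_values)
```

With this normalization, an OCR capture such as "Lot a-123" matches a stored valid value "LOT A123" despite differences in case and punctuation, while a value absent from the database fails.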
Regarding claim(s) 13, Greyshock teaches a system for capturing human-readable text displayed on a container and verifying against a known data set, said system comprising:
an image capture device configured to capture images of the container (Paragraph [0019]: “(1) capturing identification information associated with a unit dose package”);
a processor (Figure 9: “Processor 910”) in communication with the image capture device (Paragraph [0024]: “a processor in communication with the image capture device”); and
a memory (Figure 9: “Memory 920”) in communication with the processor, said memory storing an application executable by the processor, wherein the processor is configured, upon execution of the application (Paragraph [0024]: “a memory in communication with the processor and storing an application executable by the processor”), to (i) determine, based at least in part on identification information associated with the container, human-readable text associated with the identification information (Paragraph [0019]: “(2) determining, based at least in part on the identification information, a location at which human-readable text is displayed on the unit dose package; and (3) electronically capturing the human-readable text at the determined location”).
Greyshock fails to teach (ii) to validate the human-readable text using a verified database of valid values. However, Tripuraneni teaches (ii) to validate the human-readable text using a verified database of valid values (Paragraph [0003]: “obtain, based on the recognized text, validation data, the validation data including verification text; determine whether the recognized text is verified based on the verification text”; Paragraph [0012]: “the text recognition platform may obtain validation data based on a portion of the recognized text and compare portions of the validation data to portions of the recognized text, e.g., in a manner designed to verify that the recognized text is accurate”; Paragraph [0018]: “obtain validation data, e.g., from the validation data storage device […] By comparing the validation data to the recognized text, the text recognition platform may determine whether the recognized text is accurate”; and Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Greyshock and Tripuraneni before the effective filing date of the claimed invention. The motivation for this combination of references would have been the predictable improvement of integrating the validation routine of Tripuraneni into Greyshock's container-identification system to enhance reliability and reduce OCR misreads, since both references address automated text recognition and database correlation in the same field. The combination merely adds a known verification function to an existing architecture without changing its basic operation. This motivation for the combination of Greyshock and Tripuraneni is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141(III).
Regarding claim(s) 14, Greyshock as modified by Tripuraneni teaches the system of claim 13, where Tripuraneni teaches further comprising a screen configured to provide visual feedback to an operator of the system (Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”).
Regarding claim(s) 15, Greyshock as modified by Tripuraneni teaches the system of claim 13, where Greyshock teaches wherein the processor is further configured, upon execution of the application, to capture identification information associated with the container from an identification code displayed on the container (Paragraph [0020]: “In one exemplary embodiment, capturing identification information (e.g., a medication type or a manufacturer associated with the unit dose package) involves reading an identification code displayed on the unit dose package”).
Regarding claim(s) 21, Greyshock as modified by Tripuraneni teaches the system of claim 13, where Tripuraneni teaches wherein, after validating the human-readable text on the container, the processor provides visual feedback to a human operator or electronic feedback to a machine that the container has been successfully identified and the human-readable text has been verified (Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”).
Regarding claim(s) 22, Greyshock as modified by Tripuraneni teaches the system of claim 13, where Greyshock teaches wherein the valid human-readable information associated with the identification comprises at least one of an expiration date and a lot number associated with the container (Paragraph [0070]: “In addition to reading the identification code displayed on the unit dose blister, it is also often desirable to be able to electronically capture information conveyed in various types of human-readable text or codes that are similarly displayed on the unit dose blister, such as an expiration date or lot number associated with the medication”).
Regarding claim(s) 23, Greyshock teaches an apparatus for capturing human-readable text displayed on a container, said apparatus comprising:
means for capturing identification information associated with the container (Paragraph [0019]: “(1) capturing identification information associated with a unit dose package”);
means for electronically capturing the human-readable text on the container (Paragraph [0019]: “(2) determining, based at least in part on the identification information, a location at which human-readable text is displayed on the unit dose package; and (3) electronically capturing the human-readable text at the determined location”).
Greyshock fails to teach means for validating the human-readable text on the container against a known data set of valid values, linked to the identification information. However, Tripuraneni teaches means for validating the human-readable text on the container against a known data set of valid values, linked to the identification information (Paragraph [0003]: “obtain, based on the recognized text, validation data, the validation data including verification text; determine whether the recognized text is verified based on the verification text”; Paragraph [0012]: “the text recognition platform may obtain validation data based on a portion of the recognized text and compare portions of the validation data to portions of the recognized text, e.g., in a manner designed to verify that the recognized text is accurate”; Paragraph [0018]: “obtain validation data, e.g., from the validation data storage device […] By comparing the validation data to the recognized text, the text recognition platform may determine whether the recognized text is accurate”; and Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Greyshock and Tripuraneni before the effective filing date of the claimed invention. The motivation for this combination of references would have been the predictable improvement of integrating the validation routine of Tripuraneni into Greyshock's container-identification system to enhance reliability and reduce OCR misreads, since both references address automated text recognition and database correlation in the same field. The combination merely adds a known verification function to an existing architecture without changing its basic operation. This motivation for the combination of Greyshock and Tripuraneni is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141(III).
Regarding claim(s) 24, Greyshock as modified by Tripuraneni teaches the apparatus of claim 23, where Greyshock teaches wherein the human-readable text comprises at least one of an expiration date and a lot number associated with the container (Paragraph [0070]: “In addition to reading the identification code displayed on the unit dose blister, it is also often desirable to be able to electronically capture information conveyed in various types of human-readable text or codes that are similarly displayed on the unit dose blister, such as an expiration date or lot number associated with the medication”).
Regarding claim(s) 25, Greyshock as modified by Tripuraneni teaches the apparatus of claim 23, where Greyshock teaches wherein the means for electronically capturing the human-readable text comprises means for translating the human-readable text into machine-readable text using optical character recognition (Paragraph [0079]: “According to exemplary embodiments of the present invention, any of the foregoing procedures (i.e., determining the location and/or format of human-readable text, scanning a unit dose blister at the determined location to capture an image of the human-readable text, or translating the human-readable text using OCR) may be implemented by a processor, such as the controller discussed below, operating under the control of software, which controls the reader, the scanner, and OCR”);
Where Tripuraneni teaches each valid value in the verified database corresponds to machine-readable text (Paragraph [0018]: “obtain validation data, e.g., from the validation data storage device […] By comparing the validation data to the recognized text, the text recognition platform may determine whether the recognized text is accurate”; and Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”); and
the means for validating the human-readable text on the container comprises confirming that (i) the machine-readable text corresponding to the human-readable text on the container matches (ii) the machine-readable text corresponding to a valid value in the verified database (Paragraph [0012]: “the text recognition platform may obtain validation data based on a portion of the recognized text and compare portions of the validation data to portions of the recognized text, e.g., in a manner designed to verify that the recognized text is accurate”; Paragraph [0018]: “obtain validation data, e.g., from the validation data storage device […] By comparing the validation data to the recognized text, the text recognition platform may determine whether the recognized text is accurate”; and Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”).
Regarding claim(s) 27, Greyshock teaches a non-transitory computer program product (Figure 9) for capturing human-readable text displayed on a container, wherein the computer program product comprises at least one computer-readable storage medium having computer-readable program code portions stored therein, the computer-readable program code portions comprising:
a first executable portion for identifying areas of interest on an image of the container (Paragraph [0019]: “capturing human-readable text displayed on a unit dose package”);
a second executable portion for directing the capture of identification information associated with the container (Paragraph [0019]: “(1) capturing identification information associated with a unit dose package”);
a third executable portion for directing the electronic capture of the human-readable text (Paragraph [0019]: “(2) determining, based at least in part on the identification information, a location at which human-readable text is displayed on the unit dose package; and (3) electronically capturing the human-readable text at the determined location”).
Greyshock fails to teach a fourth executable portion for validating the human-readable text, wherein the container identification information is used to reference a database of valid human-readable values for the container. However, Tripuraneni teaches a fourth executable portion for validating the human-readable text, wherein the container identification information is used to reference a database of valid human-readable values for the container (Paragraph [0003]: “obtain, based on the recognized text, validation data, the validation data including verification text; determine whether the recognized text is verified based on the verification text”; Paragraph [0012]: “the text recognition platform may obtain validation data based on a portion of the recognized text and compare portions of the validation data to portions of the recognized text, e.g., in a manner designed to verify that the recognized text is accurate”; Paragraph [0018]: “obtain validation data, e.g., from the validation data storage device […] By comparing the validation data to the recognized text, the text recognition platform may determine whether the recognized text is accurate”; and Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”).
Therefore, it would have been obvious to one of ordinary skill in the art to combine Greyshock and Tripuraneni before the effective filing date of the claimed invention. The motivation for this combination of references would have been the predictable improvement of integrating the validation routine of Tripuraneni into Greyshock's container-identification system to enhance reliability and reduce OCR misreads, since both references address automated text recognition and database correlation in the same field. The combination merely adds a known verification function to an existing architecture without changing its basic operation. This motivation for the combination of Greyshock and Tripuraneni is supported by KSR exemplary rationale (G): some teaching, suggestion, or motivation in the prior art that would have led one of ordinary skill to modify the prior art reference or to combine prior art reference teachings to arrive at the claimed invention. MPEP 2141(III).
Regarding claim(s) 28, Greyshock as modified by Tripuraneni teaches the computer program product of claim 27, where Greyshock teaches wherein:
the third executable portion is configured to translate the human-readable text into machine-readable text using optical character recognition (Paragraph [0079]: “According to exemplary embodiments of the present invention, any of the foregoing procedures (i.e., determining the location and/or format of human-readable text, scanning a unit dose blister at the determined location to capture an image of the human-readable text, or translating the human-readable text using OCR) may be implemented by a processor, such as the controller discussed below, operating under the control of software, which controls the reader, the scanner, and OCR”);
Where Tripuraneni teaches each valid value in the verified database corresponds to machine-readable text (Paragraph [0018]: “obtain validation data, e.g., from the validation data storage device […] By comparing the validation data to the recognized text, the text recognition platform may determine whether the recognized text is accurate”; and Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”); and
validating the human-readable text on the container comprises confirming that (i) the machine-readable text corresponding to the human-readable text on the container matches (ii) the machine-readable text corresponding to a valid value in the verified database (Paragraph [0012]: “the text recognition platform may obtain validation data based on a portion of the recognized text and compare portions of the validation data to portions of the recognized text, e.g., in a manner designed to verify that the recognized text is accurate”; Paragraph [0018]: “obtain validation data, e.g., from the validation data storage device […] By comparing the validation data to the recognized text, the text recognition platform may determine whether the recognized text is accurate”; and Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”).
Regarding claim 30, Greyshock as modified by Tripuraneni teaches the system of claim 13, where Tripuraneni teaches wherein:
each valid value in the verified database corresponds to machine-readable text (Paragraph [0018]: “obtain validation data, e.g., from the validation data storage device […] By comparing the validation data to the recognized text, the text recognition platform may determine whether the recognized text is accurate”; and Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”); and
the processor (Figure 3; and Paragraph [0035]) is configured to:
where Greyshock teaches translate captured human-readable text into machine-readable text using optical character recognition (Paragraph [0079]: “According to exemplary embodiments of the present invention, any of the foregoing procedures (i.e., determining the location and/or format of human-readable text, scanning a unit dose blister at the determined location to capture an image of the human-readable text, or translating the human-readable text using OCR) may be implemented by a processor, such as the controller discussed below, operating under the control of software, which controls the reader, the scanner, and OCR”);
where Tripuraneni teaches validate the human-readable text on the container by confirming that (i) the machine- readable text corresponding to the human-readable text on the container matches (ii) the machine-readable text corresponding to a valid value in the verified database (Paragraph [0012]: “the text recognition platform may obtain validation data based on a portion of the recognized text and compare portions of the validation data to portions of the recognized text, e.g., in a manner designed to verify that the recognized text is accurate”; Paragraph [0018]: “obtain validation data, e.g., from the validation data storage device […] By comparing the validation data to the recognized text, the text recognition platform may determine whether the recognized text is accurate”; and Paragraph [0066]: “obtain validation data based on the recognized text. The validation data may include, for example, verification text […] from an address database, […] to confirm that the recognized […] is a valid and accurate […] use a recognized account identifier to obtain information […] from a database, and compare […] to determine whether the account identifier is accurate”).
Claim(s) 5-7, 12, 16-19, 26, 29, and 31-33 is/are rejected under 35 U.S.C. 103 as being unpatentable over Greyshock (US 2008/0300794 A1) in view of Tripuraneni et al (US 2021/0192202 A1), further in view of Gholami et al (US 2022/0068501 A1).
Regarding claim(s) 5, Greyshock as modified by Tripuraneni teaches the method of claim 4, but does not specifically teach wherein scanning the machine-readable code comprises using a neural network to detect an area on the container having the machine-readable code. However, Gholami teaches wherein scanning the machine-readable code comprises using a neural network to detect an area on the container having the machine-readable code (Paragraph [0056]: “In this disclosure, scanning the tags means automated processing of an image or series of images to read barcodes and/or using computer vision algorithms including optical character recognition an neural network approaches for character recognition and other methods to read alphanumeric characters using image data to extract written information on the sensitive medication packs such as NDC, expiration date, serial number, and lot number, among others”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the scanning of the machine-readable code in Greyshock and Tripuraneni to incorporate Gholami's use of a neural network to detect an area on the container having the machine-readable code, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. One could look to Gholami for the use of a neural network to detect alphanumeric characters and automatically extract information from printed labels and hard copy material.
Regarding claim(s) 6, Greyshock as modified by Tripuraneni teaches the method of claim 1, but does not specifically teach wherein capturing the identification information comprises using visual object detection to identify the container based on its appearance. However, Gholami teaches wherein capturing the identification information comprises using visual object detection to identify the container based on its appearance (Paragraph [0182]: “A computer vision software is used to analyze a picture and confirm if the container such as the pill pack or ampule or vial is empty”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the capturing of identification information in Greyshock and Tripuraneni to incorporate Gholami's use of visual object detection to identify the container based on its appearance, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. One could look to Gholami for visual object detection to analyze a picture and confirm the status of the container.
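The visual object detection mapped to Gholami above can be illustrated, as a hypothetical sketch only, by a classifier that assigns a container type from appearance scores. A real system would obtain the scores from trained computer-vision software; everything below is illustrative and not drawn from either reference.

```python
# Hypothetical container classes of the kind Gholami discusses
# (pill pack, ampule, vial). The scores dict stands in for the
# output of a trained vision model.

def classify_container(class_scores: dict) -> str:
    """Identify the container by appearance: pick the class with the
    highest model score. Scores are assumed, not computed here."""
    return max(class_scores, key=class_scores.get)


scores = {"pill pack": 0.1, "ampule": 0.7, "vial": 0.2}
print(classify_container(scores))  # ampule
```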
Regarding claim(s) 7, Greyshock as modified by Tripuraneni teaches the method of claim 1, but does not specifically teach wherein locating the human-readable text on the container utilizes a neural network to detect an area on the container having the human-readable text to OCR. However, Gholami teaches wherein locating the human-readable text on the container utilizes a neural network to detect an area on the container having the human-readable text to OCR (Paragraph [0056]: “In this disclosure, scanning the tags means automated processing of an image or series of images to read barcodes and/or using computer vision algorithms including optical character recognition an neural network approaches for character recognition and other methods to read alphanumeric characters using image data to extract written information on the sensitive medication packs such as NDC, expiration date, serial number, and lot number, among others”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the locating of the human-readable text in Greyshock and Tripuraneni to incorporate Gholami's use of a neural network to detect an area on the container having the human-readable text to OCR, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. One could look to Gholami for the use of a neural network to detect alphanumeric characters and automatically extract information from printed labels and hard copy material.
Regarding claim(s) 12, Greyshock as modified by Tripuraneni teaches the method of claim 1, where Greyshock teaches wherein electronically capturing the human-readable text comprises translating the human-readable text on the container processed with optical character recognition (Paragraph [0079]: “According to exemplary embodiments of the present invention, any of the foregoing procedures (i.e., determining the location and/or format of human-readable text, scanning a unit dose blister at the determined location to capture an image of the human-readable text, or translating the human-readable text using OCR) may be implemented by a processor, such as the controller discussed below, operating under the control of software, which controls the reader, the scanner, and OCR”).
Greyshock and Tripuraneni fail to teach using neural network object detection to limit an area on an image of the container. However, Gholami teaches using neural network object detection to limit an area on an image of the container (Paragraph [0056]: “In this disclosure, scanning the tags means automated processing of an image or series of images to read barcodes and/or using computer vision algorithms including optical character recognition an neural network approaches for character recognition and other methods to read alphanumeric characters using image data to extract written information on the sensitive medication packs such as NDC, expiration date, serial number, and lot number, among others”; and Paragraph [0182]: “A computer vision software is used to analyze a picture and confirm if the container such as the pill pack or ampule or vial is empty”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Greyshock and Tripuraneni to incorporate Gholami's use of neural network object detection to limit an area on an image of the container, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. One could look to Gholami for neural network object detection to detect alphanumeric characters and automatically extract information from printed labels and hard copy material. Furthermore, one could look to Gholami for visual object detection to identify the container based on its appearance, to analyze a picture, and to confirm the status of the container.
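The combined operation mapped above, detecting a bounding region on the container image and running OCR only inside that region, can be illustrated by the following hypothetical sketch. The region detector is stubbed out; in a real system it would be a trained neural-network object detector, and the OCR function would be an actual OCR engine. All names, coordinates, and the toy image are illustrative, not from the references.

```python
from dataclasses import dataclass


@dataclass
class Region:
    """Bounding box returned by the (stubbed) detector."""
    x: int
    y: int
    w: int
    h: int


def detect_text_region(image) -> Region:
    """Stub: a trained neural-network detector would predict this box."""
    return Region(x=1, y=1, w=3, h=2)  # placeholder coordinates


def crop(image, r: Region):
    """Limit the image (a list of rows) to the detected area."""
    return [row[r.x:r.x + r.w] for row in image[r.y:r.y + r.h]]


def ocr_within_region(image, ocr_fn):
    """Run OCR only on the detected area, not the full image."""
    return ocr_fn(crop(image, detect_text_region(image)))


# Toy 4x5 "image" of characters; a trivial ocr_fn concatenates them.
image = [list("ABCDE") for _ in range(4)]
print(ocr_within_region(image, lambda img: "".join(c for row in img for c in row)))
```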
Regarding claim(s) 16, Greyshock as modified by Tripuraneni teaches the system of claim 15, but does not specifically teach wherein, in order to capture the identification information, the processor uses a neural network to detect an identification code on the container. However, Gholami teaches wherein, in order to capture the identification information, the processor uses a neural network to detect an identification code on the container (Paragraph [0056]: “In this disclosure, scanning the tags means automated processing of an image or series of images to read barcodes and/or using computer vision algorithms including optical character recognition an neural network approaches for character recognition and other methods to read alphanumeric characters using image data to extract written information on the sensitive medication packs such as NDC, expiration date, serial number, and lot number, among others”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Greyshock and Tripuraneni such that the processor uses a neural network to detect an identification code on the container, as taught by Gholami, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. One could look to Gholami for the use of a neural network to detect alphanumeric characters and automatically extract information from printed labels and hard copy material.
Regarding claim(s) 17, Greyshock as modified by Tripuraneni teaches the system of claim 13, but does not specifically teach wherein the processor is further configured, upon execution of the application, to use visual object detection to identify the container based on its appearance. However, Gholami teaches wherein the processor is further configured, upon execution of the application, to use visual object detection to identify the container based on its appearance (Paragraph [0182]: “A computer vision software is used to analyze a picture and confirm if the container such as the pill pack or ampule or vial is empty”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the capturing of identification information in Greyshock and Tripuraneni to incorporate Gholami's use of visual object detection to identify the container based on its appearance, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. One could look to Gholami for visual object detection to analyze a picture and confirm the status of the container.
Regarding claim(s) 18, Greyshock as modified by Tripuraneni and Gholami teaches the system of claim 17, and Gholami teaches wherein the processor uses a neural network to execute visual object detection (Paragraph [0182]: “A computer vision software is used to analyze a picture and confirm if the container such as the pill pack or ampule or vial is empty”).
Regarding claim(s) 19, Greyshock as modified by Tripuraneni teaches the system of claim 13, but does not specifically teach wherein the processor is further configured, upon execution of the application, to locate the human-readable text on the container utilizing a neural network. However, Gholami teaches wherein the processor is further configured, upon execution of the application, to locate the human-readable text on the container utilizing a neural network (Paragraph [0056]: “In this disclosure, scanning the tags means automated processing of an image or series of images to read barcodes and/or using computer vision algorithms including optical character recognition an neural network approaches for character recognition and other methods to read alphanumeric characters using image data to extract written information on the sensitive medication packs such as NDC, expiration date, serial number, and lot number, among others”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify the locating of the human-readable text in Greyshock and Tripuraneni to incorporate Gholami's use of a neural network to locate the human-readable text on the container, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. One could look to Gholami for the use of a neural network to detect alphanumeric characters and automatically extract information from printed labels and hard copy material.
Regarding claim(s) 26, Greyshock as modified by Tripuraneni teaches the apparatus of claim 23, where Greyshock teaches wherein the means for electronically capturing the human-readable text further comprises means for translating the human-readable text on the container processed with optical character recognition (Paragraph [0079]: “According to exemplary embodiments of the present invention, any of the foregoing procedures (i.e., determining the location and/or format of human-readable text, scanning a unit dose blister at the determined location to capture an image of the human-readable text, or translating the human-readable text using OCR) may be implemented by a processor, such as the controller discussed below, operating under the control of software, which controls the reader, the scanner, and OCR”).
Greyshock and Tripuraneni fail to teach using neural network object detection to limit an area on an image of the container. However, Gholami teaches using neural network object detection to limit an area on an image of the container (Paragraph [0056]: “In this disclosure, scanning the tags means automated processing of an image or series of images to read barcodes and/or using computer vision algorithms including optical character recognition an neural network approaches for character recognition and other methods to read alphanumeric characters using image data to extract written information on the sensitive medication packs such as NDC, expiration date, serial number, and lot number, among others”; and Paragraph [0182]: “A computer vision software is used to analyze a picture and confirm if the container such as the pill pack or ampule or vial is empty”).
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to modify Greyshock and Tripuraneni to incorporate Gholami's use of neural network object detection to limit an area on an image of the container, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. One could look to Gholami for neural network object detection to detect alphanumeric characters and automatically extract information from printed labels and hard copy material. Furthermore, one could look to Gholami for visual object detection to identify the container based on its appearance, to analyze a picture, and to confirm the status of the container.
Regarding claim(s) 29 and 31-33, Greyshock as modified by Tripuraneni teaches the method of claim 11, but do not specifically teach wherein the valid value in the verified database corresp