DETAILED ACTION
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claim 20 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Claim 20 requires that the objects in the images are vehicles while parent claim 7 requires that the objects are persons. As these are mutually exclusive object types, the metes and bounds of claim 20 are unclear.
Claims 5, 6, and 9 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The claims recite “the second image” but prior antecedent basis exists only for the term “one or more second images” (plural).
Claim 20 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The claim recites “the one or more first and second image” but prior antecedent basis exists only for the term “second images” (plural).
Claim 14 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The claim recites “a virtual timeline” and then recites “the virtual timing line,” which lacks proper antecedent basis. It appears the term “timeline” should be replaced with “timing line”.
Claim 15 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The claim recites “the virtual timing line” without any prior antecedent basis.
Claim 14 is further rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. The claim recites “the timing information” without any prior antecedent basis.
Claim Objections
Claims 1, 14, and 15 are objected to because of the following informalities: each of these claims contains the following issue twice: “object associated a visual identification”. The term “with” should be added after “associated”. Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claim 17 is rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
The claim is drawn to a “computer program,” the full scope of which per se covers both transitory and non-transitory media and signals. Transitory forms of signal transmission do not constitute a statutory process, machine, manufacture, or composition of matter and are thus non-statutory (see MPEP § 2106). Applicant’s originally filed specification does not explicitly limit the “computer program” to only non-transitory embodiments.
Examiner’s Note
Examiner provides the following note regarding the use of the term “if” in the claims: Limitations contingent on a conditional which the claim language does not require do not need to be addressed under broadest reasonable interpretation to show that the prior art teaches the claim requirements. See MPEP 2111.04 (II). Examiner suggests Applicant replace with non-contingent language such as “in response to” or “when”.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-13 and 15-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Druihle (US PGPub 2021/0034877).
Regarding claim 1, Druihle discloses a method for timing and identifying objects participating in a sports event comprising: (Druihle teaches a system for imaging runners at multiple positions along a track in order to identify their bib numbers or, if not legible in the image, identify their visual signatures in order to correlate to other images with visible bib numbers.)
receiving by a server system, a first image information associated with one or more first images captured by a first camera system of a first timing system located at a first position along a sports track, the one or more first images comprising objects associated a visual identification marker or identification code participating in the sports event passing a virtual timing line, the first image information comprising visual information of at least a first object for which a passing time is determined based on the one or more first images but which cannot be identified based on the one or more first images; (¶ 0059 teaches identifying runners by their visual signatures which is used when the bib numbers/worn numbers are not able to be identified. ¶ 0063 teaches the process of using the visual signature when the bib number cannot be reliably identified. ¶ 0064 teaches that this happens at the finish line of the race. See server at ¶ 0044 and 0046)
receiving or retrieving by the server system, second image information associated with one or more second images captured by a second camera system located at a different position than the first position, the one or more second images comprising objects participating in the sports event, the second image information comprising visual information about one or more objects that can be identified based on the one or more second images; and, (¶ 0052 and 0088 teaches detecting and recognizing the bib numbers/worn numbers on the runners. See Fig. 1 and ¶ 0043 for teaching that this happens at a second location.)
identifying by the server system, the first object, wherein the identifying includes: using the first image information and second image information to determine a second object associated a visual identification marker or code in the one or more second images that matches the first object, the determining being based on first non-biometric object features associated with the first object and second non-biometric object features associated with the second object; and, if the second object is determined, identifying the first object based on the visual identification marker or code of the second object. (¶ 0089 teaches associating the identifier of the runner via the visual signature based on the previous detection of the bib number/worn number. ¶ 0063 teaches performing the visual signature identification from the runner object in the image to the matching object (reference visual signature) in the previous image via a Euclidean distance or the like. ¶ 0059 teaches that the visual signature is not using face/biometric information but rather their overall appearance like clothing.)
Druihle does not expressly disclose that all of its above-cited teachings on runner visual and text-based identification occur in the same embodiment. That is, although the reference clearly discloses each of these functions, there is no express disclosure that all of the details are found in a single embodiment; instead, some of the individual detailed disclosures are presented as ‘according to some embodiments.’ It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined the various teachings to provide a single system capable of the variety of tasks which are disclosed. In view of these teachings, this cannot be considered a non-obvious improvement over the prior art. Using known engineering design, no “fundamental” operating principle of the teachings is changed; they continue to perform the same functions as originally taught prior to being combined.
Regarding claim 2, the above combination discloses the method according to claim 1 wherein the first image information comprises at least part of one of the one or more first images comprising the first object or at least one picture of a first region of interest (ROI) in one of the one or more first images, the first ROI comprising at least part of the first object. (¶ 0074 and Fig. 5)
Regarding claim 3, the above combination discloses the method according to claim 2 wherein the first image information further comprises timing information indicative of a time instance the first object passes the virtual timing line, depth information indicative of a distance between the first camera system and the first object; and/or an identifier associated with the first ROI. (See ¶ 0066 timing information indicative of a time instance the first object passes the virtual timing line.)
Regarding claim 4, the above combination discloses the method according to claim 1 wherein the determining a second object in the one or more second images uses a re-identification algorithm, the re-identification algorithm comparing the first object with the objects in the one or more second images based on object features. (As above, ¶ 0089 teaches associating the identifier of the runner via the visual signature based on the previous detection of the bib number/worn number. ¶ 0063 teaches performing the visual signature identification from the runner object in the image to the matching object (reference visual signature) in the previous image via a Euclidean distance or the like.)
Regarding claim 5, the above combination discloses the method according to claim 1 wherein the determining a second object in the one or more second images includes: determining one or more first object features associated with the first object; determining one or more second object features associated with objects in the second image; and, determining if one of the objects in the one or more second images matches the first object in the one or more first images based on the one or more first and second object features. (As above, ¶ 0089 teaches associating the identifier of the runner via the visual signature based on the previous detection of the bib number/worn number. ¶ 0063 teaches performing the visual signature identification from the runner object in the image to the matching object (reference visual signature) in the previous image via a Euclidean distance or the like.)
Regarding claim 6, the above combination discloses the method according to claim 4 wherein the matching is based on a distance measure that is computed based on the first and second object features, the distance measure being indicative of a similarity between the first object and an object in the second image. (As above, ¶ 0063 teaches performing the visual signature identification from the runner object in the image to the matching object (reference visual signature) in the previous image via a Euclidean distance or the like.)
Regarding claim 7, the above combination discloses the method according to claim 4 wherein objects in the one or more first and second images represent persons participating in the sports event and wherein the first and second object features define features of a person. (See rejection of claim 1)
Regarding claim 8, the above combination discloses the method according to claim 3 wherein the identification of the first object further includes; searching for a visual identification marker or a visual identification code based on visual information of the second object; and, (¶ 0052 and 0088 teach detecting and recognizing the bib numbers/worn numbers on the runners.)
if a visual identification marker or visual identification code is found, transforming the identification marker or identification code into identification information for linking the second object to an identity; and, associating the first object with the identification information; and, (¶ 0088 and 0061)
storing the identification information and the timing information of the first object in a database. (¶ 0057, 0062, 0063 and 0041.)
Regarding claim 9, the above combination discloses the method according to claim 3 wherein the second image information includes identification information associated with the second image; and, (¶ 0088 and 0061)
wherein the identification of the first object further includes; associating the first object with the identification information; and, storing the identification information and the timing information of the first object in a database. (¶ 0057, 0062, 0063 and 0041.)
Regarding claim 10, the above combination discloses the method according to claim 1 further comprising: receiving timing information and identification information associated with objects in the one or more first images that have been detected, timed and identified by the first timing system based on the one or more first images. (¶ 0045, 0062, 0063 and 0041.)
Regarding claim 11, the above combination discloses the method according to claim 1 wherein the second camera system includes a processor that is configured to determine visual information about one or more objects that are identified based on the one or more second images. (See rejection of claim 1).
Regarding claim 12, the above combination discloses the method according to claim 1, wherein the second camera system is part of a second timing system configured to determine passing times of objects participating in the sports event passing a virtual timing line. (See rejection of claim 1).
Regarding claim 13, the above combination discloses the method according to claim 1 wherein the first image information includes an image frame comprising the detected non-identified object or a ROI picture comprising the detected non-identified object that is cropped out of an image frame. (¶ 0049.)
Claim 15 is the system claim corresponding to the method of claim 1. ¶ 0043-0044 teach the camera and computer systems. The remaining limitations are rejected similarly. See detailed analysis above.
Regarding claim 16, the above combination discloses the system according to claim 15, wherein the first timing system and the one or more second camera systems are configured to wirelessly communicate with the server system. (See ¶ 0044 and 0046)
Claim 17 is the computer program claim corresponding to the method of claim 1. ¶ 0043-0044 and 0026 teach the computer system and computer program. The remaining limitations are rejected similarly. See detailed analysis above.
Regarding claim 18, the above combination discloses the system according to claim 16, wherein the first timing system and the one or more second camera systems forming a communications network. (See ¶ 0044 and 0046.)
Regarding claim 19, the above combination discloses the method according to claim 13 wherein a time stamp indicates a passing time of the detected non-identified object. (See ¶ 0041 and 0045.)
Regarding claim 20, the above combination discloses the method according to claim 7 wherein objects in the one or more first and second image represent vehicles participating in the sports event and wherein the first and second object features define features of a vehicle. (¶ 0002)
Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over Guerra (US Pat. No. 6,433,817) in view of Druihle (US PGPub 2021/0034877).
Regarding claim 14, Guerra discloses a system for timing and identifying objects participating in a sports event comprising: (Guerra discloses a depth-based IR laser range finder for determining a winner at the finish line of a running race, see Abstract)
a camera system configured to capture images of a scene comprising objects on a sport track passing a virtual timeline at a first location along a race track; and, (Col. 3, ¶ 1 a CCD camera imager set up at a race finish line)
a computer connected to the camera system wherein the computer is configured to: (Col. 4. ¶ 2)
detect objects associated a visual identification marker or identification code in the images captured by the camera system; (Col. 3, ¶ 1, “the invention is an infra-red light detector, having an infra-red charge coupled device (IR CCD) circuit 17, and a visible light sensor 19”)
determine depth information associated with images, the depth information defining a relative distance between the camera system and detected objects; (Col. 4, ¶ 4)
determine passing times at which detected objects pass the virtual timing line based on the timing information and the depth information; (Col. 4, second paragraph from bottom and paragraph spanning cols. 4 and 5.)
if an object cannot be detected based on the images, generating image information, the image information comprising visual information of a non-identified object that cannot be identified; and, (Col. 5, ¶ 2)
In the field of running race image analysis, Druihle teaches identifying detected objects based on a visual identification marker or identification code in the images; and, (Druihle teaches a system for imaging runners at multiple positions along a track in order to identify their bib numbers or, if not legible in the image, identify their visual signatures in order to correlate to other images with visible bib numbers, see Abstract and rejection of claim 1.)
transmit the image information and a passing time associated with the non- identified object to a server system which is configured to identify the non-identified object based on the image information and further image information associated with one or more further images of objects associated a visual identification marker or identification code participating in the sports event captured by a further camera system located at a different position than the first position. (¶ 0089 teaches associating the identifier of the runner via the visual signature based on the previous detection of the bib number/worn number. ¶ 0063 teaches performing the visual signature identification from the runner object in the image to the matching object (reference visual signature) in the previous image at a different location. See transmission to server at ¶ 0044 and 0046.)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Guerra’s running race image analysis with Druihle’s running race image analysis. Guerra teaches a depth-based system for running race finish line detection and timing. Druihle teaches a system for imaging runners at the finish line and earlier positions along a track in order to identify their bib numbers or, if not legible in the image, identify their visual signatures in order to correlate to other images with visible bib numbers. The express purpose here is to identify runners who may not otherwise be identifiable in every image. The combination constitutes the repeatable and predictable result of simply applying Druihle’s technique in the way in which it was intended. This cannot be considered a non-obvious improvement in view of the relevant prior art here. Using known engineering design, no “fundamental” operating principle of the teachings is changed; they continue to perform the same functions as originally taught prior to being combined.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Raphael Schwartz whose telephone number is (571)270-3822. The examiner can normally be reached Monday to Friday 9am-5pm CT.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph can be reached at (571) 272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RAPHAEL SCHWARTZ/ Examiner, Art Unit 2671