DETAILED ACTION
Notice of Pre-AIA or AIA Status
1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
2. The information disclosure statement (IDS) submitted on 09/03/2024 is in compliance with the provisions of 37 CFR 1.97 and has been considered by the Examiner.
Claim Interpretation
3. The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
4. Claims 1-10 in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a two-dimensional (2D) skeleton unit; a three-dimensional (3D) skeleton unit; a comparison unit” in claim 1; “an image correction unit” in claim 2; “a recognition unit” in claim 3; “a 3D point cloud unit” in claim 4; “a model of a 3D point cloud unit” in claim 5; “a 3D point cloud unit”, “3D skeleton unit”, “a matching module” in claim 6; “a 3D-to-2D unit”, “a curve comparison unit” in claim 7; “a computational block”, “a midline point calculation block” in claim 8; “an analysis module”, “a convex hull point calculation module”, “a defect point calculation module”, “a fish-mouth-point tail-fork-point calculation module” in claim 9; “an overall skeleton extraction module”, “a fish body determining block”, “a comparison block”, “a fish mouth point determining block”, “a fish tail skeleton endpoint determining block”, “a tail fork point determining block” in claim 10.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 112
5. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
6. Claims 1-10 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. (Note: the rejection applies to the subsequent dependent claims.)
Claim limitations "a two-dimensional(2D) skeleton unit; a three-dimensional (3D) skeleton unit; a comparison unit” in claim 1; “an image correction unit” in claim 2; “a recognition unit” in claim 3; “a 3D point cloud unit” in claim 4; “a model of a 3D point cloud unit” in claim 5; “a 3D point cloud unit”, “3D skeleton unit”, “a matching module” in claim 6; “a 3D-to-2D unit”, “a curve comparison unit” in claim 7; “an analysis module”, “a convex hull point calculation module”, “a defect point calculation module”, “a fish-mouth-point tail-fork-point calculation module” in claim 9; “an overall skeleton extraction module” in claim 10, each limitation is a limitation which invokes 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function.
The specification provides no disclosure of a structure for each of the limitations “a two-dimensional (2D) skeleton unit; a three-dimensional (3D) skeleton unit; a comparison unit” in claim 1; “an image correction unit” in claim 2; “a recognition unit” in claim 3; “a 3D point cloud unit” in claim 4; “a model of a 3D point cloud unit” in claim 5; “a 3D point cloud unit”, “3D skeleton unit”, “a matching module” in claim 6; “a 3D-to-2D unit”, “a curve comparison unit” in claim 7; “an analysis module”, “a convex hull point calculation module”, “a defect point calculation module”, “a fish-mouth-point tail-fork-point calculation module” in claim 9; and “an overall skeleton extraction module” in claim 10, either as a dedicated structure that performs the recited function or as a combination of a general purpose processor and an algorithm that enables it to perform the function. Throughout the specification, these limitations are merely represented by labeled boxes in the figures and are described only by their function within the detailed disclosure. As such, the scope of claims 1-10 cannot be determined.
Therefore, claims 1-10 are indefinite and are rejected under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph.
Applicant may:
(a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph;
(b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)).
If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either:
(a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or
(b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function.
For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181.
Claim Rejections - 35 USC § 103
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
8. Claims 1-2, 4-5, 11-12 and 14-15 are rejected under 35 U.S.C. 103 as being unpatentable over Seyrek Pierre et al. (“Seyrek Pierre”) [US-2024/0029347-A1] in view of Kitagawa (“Kitagawa”) [US-2019/0277624-A1].
Regarding claim 1, Seyrek Pierre discloses a catch monitoring device (Seyrek Pierre- Fig. 1 and ¶0039, at least disclose A device 103 holding one or more cameras is submerged into the fish cage 101 […] The invention is not limited to observation or monitoring of fish, and more generally the observed objects may be referred to as aquatic animals 102), comprising:
a two-dimensional (2D) skeleton unit, configured to generate a 2D skeleton according to a 2D image (Seyrek Pierre- ¶0004, at least discloses obtaining one or more 2D images of the aquatic animal from one or more cameras configured to observe the aquatic environment, processing image data from one or more of the obtained 2D images to identify key points on or inside the aquatic animal, including occluded key points, and determine their locations in the one or more 2D images, and generating one or more 2D skeletons [generate a 2D skeleton according to a 2D image] represented as nodes connected by edges. Each node in a 2D skeleton corresponds to one identified key point; ¶0006, at least discloses one camera and in these cases only one 2D skeleton is generated; ¶0012, at least discloses at least one processor [a two-dimensional (2D) skeleton unit] configured to receive image data from the one or more cameras and to process the image data. The processing may include to identify key points on or inside the aquatic animal, including occluded key points, and their locations in the one or more 2D images, generate one or more 2D skeletons represented as nodes connected by edges);
a three-dimensional (3D) skeleton unit, configured to generate a 3D skeleton according to a 3D point cloud image (Seyrek Pierre- ¶0004, at least discloses For the nodes in the one or more 2D skeletons, estimated 3D positions are calculated, and from the estimated 3D positions of the nodes [3D point] of the one or more 2D skeletons, 3D coordinates of the nodes of 3D skeleton are determined and a 3D skeleton is generated [generate a 3D skeleton according to a 3D point] as a pre-defined structure of nodes connected by edges; ¶0012, at least discloses generating a three-dimensional skeleton representation of an aquatic animal, comprising a device with two open ends connected by a channel, and configured receive water from an aquatic environment through the channel, one or more cameras attached to the walls of the device and directed towards the interior of the device, at least one processor [a three-dimensional (3D) skeleton unit] configured to receive image data from the one or more cameras and to process the image data. The processing may include to identify key points on or inside the aquatic animal […] calculate estimated 3D positions for nodes in the one or more 2D skeletons, determine the 3D coordinates of the nodes of a 3D skeleton from the estimated 3D positions of the nodes of the one or more 2D skeletons, and generate the 3D skeleton as a pre-defined structure of nodes connected by edges; ¶0054, at least discloses The module downstream from the pre-processing module 501 is an image analysis module 502. This module may be implemented as one or more edge computing modules, or as a cloud service, or as a combination of edge and cloud computing where some tasks are performed near the cameras and additional processing is performed in the cloud [cloud image]); and
a comparison unit, coupled to the 2D skeleton unit and the 3D skeleton unit, configured to determine whether to trigger the catch monitoring device to output catch length, catch girth, according to the 2D skeleton and the 3D skeleton (Seyrek Pierre- ¶0014, at least discloses the system includes only one camera, and the processor is further configured to calculate estimated 3D positions relative to a known position of the camera by defining a direction in 3D space from the known position of the camera to a position of a key point identified in a 2D image plane, and calculate a corresponding 3D position by matching possible positions along the defined direction and possible poses for the aquatic animal; Fig. 1 and ¶0039, at least disclose A device 103 holding one or more cameras is submerged into the fish cage 101 […] The invention is not limited to observation or monitoring of fish, and more generally the observed objects may be referred to as aquatic animals 102; ¶0052, at least discloses The dimensions of the device 103, both with respect to its overall size and with respect to its various components, depend primarily on the size of the aquatic animals it is intended to view; ¶0057, at least discloses an aquatic animal, in this case a fish, and a 3D skeleton which is generated from key points on or inside the fish. With an arrangement of cameras viewing the interior of the device 103 from a plurality of angles, it is possible to obtain images of the observed aquatic animal that essentially cover its entire circumference [catch girth] […] 3D information about the aquatic animal may then be derived based on a comparison of how various details or features of the animal are positioned relative to each other in images captured from different angles or points of view. These relationships make it possible to go from 2D skeletons generated from individual 2D images to 3D skeletons […] The generated 3D skeleton can be used to measure dimensions, pose, movement and more; ¶0069, at least discloses In step 707, after the 3D coordinates have been determined for each node in the 3D skeleton, the 3D skeleton as a whole can be generated and provided as output from the process. The skeleton will now comprise a complete set of pre-defined nodes and links, or edges, between them according to the defined structure of the 3D skeleton. The process of generating the 3D skeleton can now be terminated in step 708; ¶0074, at least discloses In step 803 a fingerprint is generated based on the relative lengths of the edges [catch length] in the 3D skeleton graph; ¶0081, at least discloses If abnormal behavior is detected, this may be registered, counted, an alarm may be triggered, or some other action may be initiated).
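As an illustrative aside, and not a characterization of any reference's actual source code, the node-and-edge skeleton described by Seyrek Pierre (¶0004, ¶0012) can be sketched as a simple graph data structure. All names, coordinates, and the edge-length helper below are assumptions for illustration:

```python
# Minimal sketch of a skeleton as "nodes connected by edges" (Seyrek Pierre
# ¶0004/¶0012). Node names and coordinates are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Skeleton:
    # node name -> coordinates (2 values for a 2D skeleton, 3 for a 3D skeleton)
    nodes: dict = field(default_factory=dict)
    # pre-defined structure: edges connecting named nodes
    edges: list = field(default_factory=list)

    def edge_length(self, a: str, b: str) -> float:
        pa, pb = self.nodes[a], self.nodes[b]
        return sum((x - y) ** 2 for x, y in zip(pa, pb)) ** 0.5

# A coarse 3D fish skeleton; the snout-to-caudal-root length can serve as a
# body-length estimate, consistent with measuring dimensions from the 3D
# skeleton (¶0057) and relative edge lengths (¶0074).
fish3d = Skeleton(
    nodes={"snout": (0.00, 0.00, 0.00), "dorsal_fin": (0.22, 0.06, 0.00),
           "caudal_root": (0.45, 0.00, 0.00)},
    edges=[("snout", "dorsal_fin"), ("dorsal_fin", "caudal_root")],
)
print(fish3d.edge_length("snout", "caudal_root"))  # 0.45
```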
Seyrek Pierre does not explicitly disclose, but Kitagawa discloses
output catch weight (Kitagawa- ¶0032, at least discloses displaying, on a display device, captured images taken by capturing a fish being a target object to be measured; Fig. 3 and ¶0068, at least discloses a display device 26 that displays information; ¶0111, at least discloses a relation between length and weight that enables estimation of a weight of fish based on those lengths can be obtained, the analysis unit 33 may estimate the weight of fish based on the those calculated lengths).
Kitagawa further discloses
output catch length, catch girth (Kitagawa- Fig. 6 and ¶0032, at least disclose displaying, on a display device, captured images taken by capturing a fish being a target object to be measured; Figs. 8, 10-11 show a length of fish; Fig. 3 and the related description at least disclose an information processing device 20 that includes a function of calculating a length of fish from captured images of a fish being a target object to be measured captured by a plurality of (two) cameras 40A and 40B as represented in FIG. 4A; ¶0003, at least discloses estimating a size of a fish using an image size (number of pixels)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Seyrek Pierre to incorporate the teachings of Kitagawa, and apply the weight of fish to Seyrek Pierre's teachings in order to determine whether to trigger the catch monitoring device to output catch length, catch girth, or catch weight according to the 2D skeleton and the 3D skeleton.
Doing so would enable easy and accurate detection of the length of an object to be measured based on a captured image.
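For further illustration of the Kitagawa mapping: ¶0111 states only that a relation between length and weight "can be obtained" and used to estimate weight. A standard fishery model that could serve as such a relation is the allometric form W = a·L^b; the form and coefficients below are assumptions, not Kitagawa's disclosure:

```python
# Hedged sketch: weight estimation from a measured length via the common
# allometric relation W = a * L**b. Coefficients are invented placeholders;
# Kitagawa does not specify the functional form.
def estimate_weight_g(fork_length_cm: float, a: float = 0.012, b: float = 3.0) -> float:
    """Estimate fish weight in grams from fork length in centimeters."""
    return a * fork_length_cm ** b

print(estimate_weight_g(40.0))  # 768.0 g for a 40 cm fish under these placeholders
```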
Regarding claim 2, Seyrek Pierre in view of Kitagawa, discloses the catch monitoring device of claim 1, and discloses the device further comprising:
an image correction unit, coupled to the 2D skeleton unit and the 3D skeleton unit, configured to convert an image received by the catch monitoring device into a corrected image (Seyrek Pierre- ¶0012, at least discloses at least one processor configured to receive image data from the one or more cameras and to process the image data; Kitagawa- ¶0114, at least discloses image processing of correcting distortion of fish body due to fluctuation of water. Further, the information processing device 20 may perform image processing of correcting the captured image in consideration of a capturing condition such as a water depth, brightness, or the like of an object).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Seyrek Pierre to incorporate the teachings of Kitagawa, and apply the image processing of correcting distortion of the fish body to Seyrek Pierre's teachings in order to convert an image received by the catch monitoring device into a corrected image.
The same motivation that was utilized in the rejection of claim 1 applies equally to this claim.
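Kitagawa ¶0114 describes correcting distortion of the fish body and correcting the captured image for capturing conditions, without specifying an algorithm. One conventional possibility is calibrated lens undistortion, sketched below with assumed (dummy) calibration values:

```python
# Sketch of one conventional "image correction unit" step: lens undistortion
# with OpenCV. The intrinsic matrix and distortion coefficients are dummies;
# Kitagawa does not disclose this particular method.
import cv2
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed intrinsics: fx, fy, cx, cy
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.array([-0.25, 0.08, 0.0, 0.0, 0.0])  # assumed distortion coefficients

def correct_image(img: np.ndarray) -> np.ndarray:
    """Return an undistorted ('corrected') copy of a raw camera frame."""
    return cv2.undistort(img, K, dist)
```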
Regarding claim 4, Seyrek Pierre in view of Kitagawa, discloses the catch monitoring device of claim 1, and discloses the device further comprising:
a 3D point cloud unit, coupled to the 3D skeleton unit, configured to generate the 3D point cloud image or a fin-retracted 3D point cloud image according to a species of an object and an extraction image corresponding to the object (Seyrek Pierre- ¶0012, at least discloses a device with two open ends connected by a channel, and configured receive water from an aquatic environment through the channel, one or more cameras attached to the walls of the device and directed towards the interior of the device, at least one processor [3D point cloud unit] configured to receive image data from the one or more cameras and to process the image data. The processing may include to identify key points on or inside the aquatic animal […] determine the 3D coordinates of the nodes of a 3D skeleton from the estimated 3D positions of the nodes of the one or more 2D skeletons, and generate the 3D skeleton as a pre-defined structure of nodes connected by edges, and store or transmit the 3D skeleton as a data structure including the structure of nodes connected by edges -> image with a collection of 3D coordinates of the nodes in a 3D coordinate system suggests 3D point cloud image; ¶0059, at least discloses it has been proposed to examine areas near landmark points on an image of a fish [a species of an object] and use feature extraction to build feature vectors that hopefully can be used to identify individual fish; ¶0085, at least discloses use feature extraction and/or pattern recognition based on known properties of the object searched for; Kitagawa- ¶0077, at least discloses The training data is obtained by extracting regions of the captured image [an extraction image] where respective feature parts of being the tip of head and the caudal fin are captured from a large number of captured images in which the fish of the type to be measured is captured).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Seyrek Pierre to incorporate the teachings of Kitagawa, and apply the extraction image to Seyrek Pierre's teachings in order to generate the 3D point cloud image or a fin-retracted 3D point cloud image according to a species of an object and an extraction image corresponding to the object.
The same motivation that was utilized in the rejection of claim 1 applies equally to this claim.
Regarding claim 5, Seyrek Pierre in view of Kitagawa, discloses the catch monitoring device of claim 1, and further discloses wherein a model of a 3D point cloud unit of the catch monitoring device is trained using a training data group (Seyrek Pierre- ¶0005, at least discloses the processing of image data from one or more images to identify key points utilizes a machine learning algorithm that has been trained on annotated image data of similar aquatic animals [training data group]), wherein the training data group comprises a plurality of point cloud images (Seyrek Pierre- ¶0010, at least discloses a plurality of 3D skeleton data structures generated from a sequence of 2D images are obtained. Based on this sequence of 3D skeletons the change in pose for the aquatic animal over time can be analyzed in order to determine if any motion, pose, or behavior can be classified as abnormal → sequence of 3D skeletons suggests a plurality of point cloud images), which are created by rendering a rigged 3D object model using a 3D engine rendering platform (Seyrek Pierre- Fig. 6 shows a 3D fish model; ¶0054, at least discloses one or more edge computing modules, or as a cloud service, or as a combination of edge and cloud computing where some tasks are performed near the cameras and additional processing is performed in the cloud; ¶0056, at least discloses the processing modules 501, 502, 503 may be thought of as one computer system 601 consisting of several modules that may be implemented as a combination of software and hardware and located at one or several locations, including in the device 103, in the vicinity of the body of water 100 in which the device is submerged, and remotely for example in a remote control room, a server, or as a cloud service), wherein the plurality of point cloud images comprise an object (Seyrek Pierre- Fig. 6 and ¶0062, at least disclose a 3D skeleton representation 601. This skeleton 601 is representative of a fish 102, such as a salmon, and may have been generated from 2D images of the fish) or a part of the object of different viewing-angles, different swing angles, different sizes, different fin-retraction degrees, or different data point densities, respectively (Seyrek Pierre- ¶0046, at least discloses the objects from different angles […] the light sources 302 may be provided midway between two cameras such that an equal amount of light from two adjacent light sources illuminate an object as seen from the viewing angle of a particular camera 301; ¶0057, at least discloses an aquatic animal, in this case a fish, and a 3D skeleton which is generated from key points on or inside the fish […] 3D information about the aquatic animal may then be derived based on a comparison of how various details or features of the animal are positioned relative to each other in images captured from different angles or points of view [different viewing-angles]).
The methods of claims 11-12 and 14-15 are similar in scope to the functions performed by the catch monitoring device of claims 1-2 and 4-5, and therefore claims 11-12 and 14-15 are rejected under the same rationale.
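As a sketch of the claim-5 training data group (point clouds of an object at different viewing angles, sizes, and data point densities), the variant generator below substitutes a random stand-in cloud for an actual rigged 3D model rendered in a 3D engine; every value here is an assumption for illustration:

```python
# Illustrative generation of point-cloud training variants. A real pipeline
# would render a rigged fish mesh in a 3D engine; this numpy-only stand-in
# only demonstrates the viewing-angle / size / density axes of variation.
import numpy as np

rng = np.random.default_rng(0)
base = rng.normal(size=(2048, 3))  # stand-in for points sampled from a fish mesh

def make_variant(points, yaw_deg=0.0, scale=1.0, keep=1.0):
    """Rotate about the vertical axis, rescale, and subsample a point cloud."""
    t = np.deg2rad(yaw_deg)
    R = np.array([[ np.cos(t), 0.0, np.sin(t)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(t), 0.0, np.cos(t)]])
    pts = (points @ R.T) * scale
    idx = rng.choice(len(pts), size=int(keep * len(pts)), replace=False)
    return pts[idx]

training_group = [make_variant(base, yaw_deg=a, scale=s, keep=k)
                  for a in (0, 45, 90) for s in (0.8, 1.2) for k in (0.5, 1.0)]
print(len(training_group), training_group[0].shape)  # 12 variants
```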
9. Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Seyrek Pierre in view of Kitagawa, further in view of Ikegami et al. (WO-2022/209435-A1, using EP4317910A1 as the translation of this document), referred to herein as Ikegami.
Regarding claim 3, Seyrek Pierre in view of Kitagawa, discloses the catch monitoring device of claim 1, and discloses the device further comprising:
a recognition unit (As discussed above), coupled to the 2D skeleton unit and the 3D skeleton unit (As discussed above), configured to recognize a species of an object in a corrected image, an extraction image corresponding to the object, and the 2D image (Seyrek Pierre- ¶0059, at least discloses it has been proposed to examine areas near landmark points on an image of a fish and use feature extraction to build feature vectors that hopefully can be used to identify individual fish; ¶0085, at least discloses use feature extraction and/or pattern recognition based on known properties of the object searched for; Kitagawa- ¶0077, at least discloses The training data is obtained by extracting regions of the captured image where respective feature parts of being the tip of head and the caudal fin are captured from a large number of captured images in which the fish of the type to be measured is captured; Fig. 9 and ¶0079, at least disclose when regions where the tip of head and the caudal fin are merely captured in no consideration of measurement points P as represented in FIG. 9 are extracted as training data, and reference data are generated based on the training data, the center of the reference data does not always represent a measurement point P; ¶0114, at least discloses image processing of correcting distortion of fish body due to fluctuation of water. Further, the information processing device 20 may perform image processing of correcting the captured image in consideration of a capturing condition such as a water depth, brightness, or the like of an object).
The prior art does not explicitly disclose, but Ikegami discloses
the 2D image is a contour image corresponding to the object (Ikegami- ¶0036, at least discloses a length between intersections of a straight line H perpendicular to the straight line L and the contour of the fish body can be estimated as a body depth; ¶0083, at least discloses The body depth measuring part 65 measures the body depth based on intersecting positions between the body depth auxiliary lines projected on the 2D plane and the contour of the fish body (for example, the contour on the segmentation image) […] The deficit of the contour etc. of the fish body can be determined, for example, by comparing a distance between the intersections of the body depth auxiliary lines and the contour of the fish body with a given threshold).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Seyrek Pierre/Kitagawa to incorporate the teachings of Ikegami, and apply the contour of the fish body to Seyrek Pierre/Kitagawa's teachings in order to recognize a species of an object in a corrected image, an extraction image corresponding to the object, and the 2D image, wherein the 2D image is a contour image corresponding to the object.
Doing so would enable estimating the size of an underwater life form.
The method of claim 13 is similar in scope to the functions performed by the catch monitoring device of claim 3 and therefore claim 13 is rejected under the same rationale.
10. Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Seyrek Pierre in view of Kitagawa, further in view of Jones et al. (“Jones”) [US-2021/0335039-A1].
Regarding claim 6, Seyrek Pierre in view of Kitagawa, discloses the catch monitoring device of claim 1, and further discloses wherein a 3D point cloud unit of the catch monitoring device is coupled to the 3D skeleton unit (see Claim 4 rejection for detailed analysis) and comprises:
a matching module, configured to select a 3D point cloud image corresponding to the 3D point cloud image from a training data group (Seyrek Pierre- ¶0005, at least discloses the processing of image data from one or more images to identify key points utilizes a machine learning algorithm that has been trained on annotated image data of similar aquatic animals [training data group]; ¶0004, at least discloses For the nodes in the one or more 2D skeletons, estimated 3D positions are calculated, and from the estimated 3D positions of the nodes [3D point] of the one or more 2D skeletons, 3D coordinates of the nodes of 3D skeleton are determined and a 3D skeleton is generated [3D point cloud image] as a pre-defined structure of nodes connected by edges; ¶0012, at least discloses generating a three-dimensional skeleton representation of an aquatic animal, comprising a device with two open ends connected by a channel, and configured receive water from an aquatic environment through the channel, one or more cameras attached to the walls of the device and directed towards the interior of the device, at least one processor [a matching module] configured to receive image data from the one or more cameras and to process the image data), wherein the 3D point cloud image and the selected 3D point cloud image comprise an object or a part of the object of a same viewing-angle, a same swing angle, a same size, and different fin-retraction degrees, respectively (Seyrek Pierre- ¶0059, at least discloses it has been proposed to examine areas near landmark points on an image of a fish [an object] and use feature extraction to build feature vectors that hopefully can be used to identify individual fish).
The prior art does not explicitly disclose a fin-retracted 3D point cloud image corresponding to the 3D point cloud image based on a chamfer distance algorithm.
However, Jones discloses
a fin-retracted 3D point cloud image corresponding to the 3D point cloud image based on a chamfer distance algorithm (Jones- ¶0007, at least discloses comparing each generated shape space vector with a corresponding shape space vector for the corresponding target mesh where the corresponding shape space vector associated with the target mesh is encoded using the mesh encoder; determining a second value for a second error function based on a comparison of the generated shape space vector and the corresponding shape space vector; and updating one or more parameters of the image encoder based on the second value. The mapping is used to apply the texture to the 3D mesh of the object. The uv regressor is trained using a loss function that includes a descriptor loss based on a chamfer distance; ¶0045, at least discloses the network may implicitly learn geometrical correlations such that, e.g., fins for a fish in the template mesh align with fins for a fish in an input image. If the template mesh was animated by rigging, the template mesh may inherit that animation even after it has been deformed; ¶0111-0113, at least disclose The UV regressor may be trained using a loss function that includes a descriptor loss based on a Chamfer distance […] Descriptor loss: Let Qj be the set of N sampled points from the image for descriptor class j, and let Rj be the set of M sampled points from the mesh for descriptor class j. The loss is the sum over j of Chamfer_distance(Qj, Rj) for j in J, where J is the set of all descriptor classes, and where Chamfer distance is the sum over the distances to the nearest neighbor for each point in the mesh).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Seyrek Pierre/Kitagawa to incorporate the teachings of Jones, and apply the Chamfer distance to Seyrek Pierre/Kitagawa's teachings in order to select a fin-retracted 3D point cloud image corresponding to the 3D point cloud image from a training data group based on a chamfer distance algorithm, wherein the 3D point cloud image and the fin-retracted 3D point cloud image comprise an object or a part of the object of a same viewing-angle, a same swing angle, a same size, and different fin-retraction degrees, respectively.
Doing so would make the mapping usable to apply a texture or to generate a 3D animation of the object.
The method of claim 16 is similar in scope to the functions performed by the catch monitoring device of claim 6 and therefore claim 16 is rejected under the same rationale.
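Jones ¶0111-0113 defines the Chamfer distance as the sum over the distances to the nearest neighbor for each point. The sketch below implements that definition (taken symmetrically) and shows how a best-matching stored cloud could be selected from a training data group, which is the role the claim-6 mapping attributes to the matching module; the selection helper is an assumption for illustration:

```python
# Chamfer distance per Jones's description: for each point, the distance to
# its nearest neighbor in the other set, summed. Brute-force O(N*M); a
# KD-tree would replace the pairwise matrix at scale.
import numpy as np

def chamfer_distance(P: np.ndarray, Q: np.ndarray) -> float:
    """Symmetric chamfer distance between point clouds P (N,3) and Q (M,3)."""
    d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).sum() + d.min(axis=0).sum()

def select_match(query: np.ndarray, group: list) -> np.ndarray:
    """Pick the stored cloud closest to `query` under the chamfer criterion."""
    return min(group, key=lambda ref: chamfer_distance(query, ref))
```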
11. Claims 7 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Seyrek Pierre in view of Kitagawa, further in view of Ikegami et al. (WO-2022/209435-A1, using EP4317910A1 as the translation of this document), referred to herein as Ikegami, and still further in view of Xu et al. (“Xu”) [US-2021/0209851-A1].
Regarding claim 7, Seyrek Pierre in view of Kitagawa, discloses the catch monitoring device of claim 1, and though the prior art discloses the 3D skeleton unit and the 3D skeleton (see Claim 1 rejection for detailed analysis) and determining whether to trigger the catch monitoring device to output the catch length, the catch girth, or the catch weight according to whether the distance is less than a threshold (see Claim 1 rejection for detailed analysis), the prior art does not explicitly disclose, but Ikegami discloses
a 3D-to-2D unit configured to project the 3D skeleton into a coordinate system of the 2D image to form a skeleton projection (Ikegami- ¶0072, at least discloses The plane projecting part 52 projects the 3D position of the fork on the XZ plane […] The plane projecting part 52 records the position of the fork seen from the back side of the fish for every frame. Therefore, the position of the fork can be plotted on the XZ plane);
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Seyrek Pierre/Kitagawa to incorporate the teachings of Ikegami, and apply the projection of the 3D position of the fork onto the XZ plane to Seyrek Pierre/Kitagawa's teachings in order to project the 3D skeleton into a coordinate system of the 2D image to form a skeleton projection.
Doing so would enable estimating the size of an underwater life form.
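Ikegami's plane projecting part 52 projects a 3D position onto the XZ plane; more generally, projecting a 3D skeleton into the coordinate system of a 2D image can be sketched as a pinhole projection. The intrinsic parameters below are assumptions, not taken from any cited reference:

```python
# Pinhole-projection sketch of a "3D-to-2D unit": map each 3D skeleton node
# (in the camera frame) to pixel coordinates in the 2D image. Intrinsics are
# placeholders.
import numpy as np

def project_points(pts3d: np.ndarray, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project (N,3) camera-frame points to (N,2) pixel coordinates."""
    x, y, z = pts3d[:, 0], pts3d[:, 1], pts3d[:, 2]
    return np.stack([fx * x / z + cx, fy * y / z + cy], axis=1)

skeleton3d = np.array([[0.00, 0.00, 1.0],
                       [0.20, 0.05, 1.1],
                       [0.45, 0.00, 1.2]])
skeleton_projection = project_points(skeleton3d)  # compared against the 2D skeleton
```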
The prior art does not explicitly disclose, but Xu discloses
a curve comparison unit, coupled to the 2D skeleton unit and the 3D-to-2D unit, configured to calculate a distance between the 2D skeleton and the skeleton projection according to a Frechet distance algorithm (Xu- ¶0140, at least discloses the frechet distance value may be used to measure the similarity between the key point features. The smaller the frechet distance value between the two feature curves is, the more similar the shapes of the two feature curves are, i.e., the higher the similarity is, correspondingly, the greater the similarity between the partial face regions respectively corresponding to the two feature curves is; Figs. 21-22 and ¶0179, at least disclose the similarity determination unit 2211 may include: a curve fitting subunit 2201, configured to fit a feature curve representing the partial face region according to the key point coordinate combination of the partial face region; and a similarity determination subunit 2202, configured to determine the similarity between the key point feature of the partial face region and the corresponding reference key point feature in the reference model database according to a distance between the feature curve and a corresponding reference feature curve in the reference model database).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Seyrek Pierre/Kitagawa/Ikegami to incorporate the teachings of Xu, and apply the Frechet distance algorithm to Seyrek Pierre/Kitagawa/Ikegami's teachings in order to calculate a distance between the 2D skeleton and the skeleton projection according to a Frechet distance algorithm.
Doing so would improve the efficiency and accuracy of face kneading, a technology being studied by persons skilled in the art.
The method of claim 17 is similar in scope to the functions performed by the catch monitoring device of claim 7 and therefore claim 17 is rejected under the same rationale.
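Xu ¶0140 uses the Fréchet distance value to measure the similarity between feature curves. The standard discrete (Eiter-Mannila) dynamic program below computes it for two polylines such as the 2D skeleton and the skeleton projection; the threshold test mirrors the claim-7 trigger condition, and the threshold value is a placeholder:

```python
# Discrete Fréchet distance between two curves, the curve-similarity measure
# Xu applies. Smaller values mean more similar curve shapes (Xu ¶0140).
import numpy as np

def frechet_distance(P: np.ndarray, Q: np.ndarray) -> float:
    """Discrete Fréchet distance between polylines P (N,2) and Q (M,2)."""
    n, m = len(P), len(Q)
    ca = np.full((n, m), np.inf)
    for i in range(n):
        for j in range(m):
            d = np.linalg.norm(P[i] - Q[j])
            if i == 0 and j == 0:
                ca[i, j] = d
            elif i == 0:
                ca[i, j] = max(ca[0, j - 1], d)
            elif j == 0:
                ca[i, j] = max(ca[i - 1, 0], d)
            else:
                ca[i, j] = max(min(ca[i - 1, j], ca[i - 1, j - 1], ca[i, j - 1]), d)
    return ca[-1, -1]

THRESHOLD_PX = 12.0  # placeholder value
def should_trigger(skel2d: np.ndarray, projection: np.ndarray) -> bool:
    return frechet_distance(skel2d, projection) < THRESHOLD_PX
```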
12. Claims 8 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Seyrek Pierre in view of Kitagawa, further in view of Xu et al. (“Xu”) [US-2021/0209851-A1].
Regarding claim 8, Seyrek Pierre in view of Kitagawa, discloses the catch monitoring device of claim 1, and further discloses wherein the 2D skeleton unit comprises:
a computational block (Seyrek Pierre- ¶0012, at least discloses at least one processor [computational block] configured to receive image data from the one or more cameras and to process the image data), configured to compute at least one 2D image based on the 2D image (Seyrek Pierre- Fig. 6 and ¶0062, at least disclose skeleton 601 is representative of a fish 102, such as a salmon, and may have been generated from 2D images of the fish), and the 2D image is a contour image corresponding to the object image (Seyrek Pierre- Fig. 6 shows the 2D image is a contour image corresponding to the object image of a salmon); and
a midline point calculation block (Seyrek Pierre- ¶0012, at least discloses at least one processor [midline point calculation block] configured to receive image data from the one or more cameras and to process the image data), coupled to the computational block, configured to select a plurality of midline points, which are closest to a fish mouth point or a tail fork point, from the at least one contour of the at least one 2D image (Seyrek Pierre- Fig. 6 shows key points 604, 605 and 601 [a plurality of midline points] which are closest to a fish mouth point 604 or node 612 is closest to 2 tail fork points from the at least one contour of the at least one 2D image of fish image; ¶0063, at least discloses Nodes 602 are associated with key points that are found at various locations on the fish 102 Such key points may include the snout 604, the eyes 605, the respective ends of the pectoral fins 606, the pelvic fins 607, and the anal fins 608, and at the root and upper and lower end of the caudal fin 609. Nodes 602 may also be located at the dorsal fin 610 and adipose fin 611), wherein the 2D skeleton comprises the fish mouth point, the tail fork point, and the plurality of midline points (Seyrek Pierre- Fig. 6 shows the 2D skeleton comprises the fish mouth point 604, the tail fork points, and the plurality of midline points 603, 612; Kitagawa- Fig. 15 shows fishes with the plurality of midline points).
The prior art does not explicitly disclose, but Xu discloses
a minimum distance between a contour of the 2D image and at least one contour of the at least one 2D image is equal to at least one minimum distance between the at least one contour of the at least one 2D image (Xu- Fig. 7 shows a minimum distance between an inside contour at point 63 with outside contour at point 52 is equal to at least one minimum distance between the at least one inside contour of the at least one 2D image).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Seyrek Pierre/Kitagawa to incorporate the teachings of Xu, and apply the contour of the at least one 2D image to Seyrek Pierre/Kitagawa's teachings such that a minimum distance between a contour of the 2D image and at least one contour of the at least one 2D image is equal to at least one minimum distance between the at least one contour of the at least one 2D image, and the 2D image is a contour image corresponding to the object.
Doing so would improve the efficiency and accuracy of face kneading, a technology being studied by persons skilled in the art.
The method of claim 18 is similar in scope to the functions performed by the catch monitoring device of claim 8 and therefore claim 18 is rejected under the same rationale.
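One plausible reading of the claim-8 computational block and midline point calculation block, offered purely for illustration and not drawn from the cited references, is that each computed 2D image is an eroded silhouette whose contour lies at a constant minimum distance inside the previous contour, with midline points then selected as the contour points nearest the fish mouth point:

```python
# Assumed interpretation of claim 8: successive erosions produce nested
# contours at equal minimum distances; from each inner contour, take the
# point nearest the fish mouth point as a midline point.
import cv2
import numpy as np

def midline_points(mask: np.ndarray, mouth: np.ndarray, steps=5, px_per_step=6):
    """mask: uint8 binary fish silhouette; mouth: (x, y) fish mouth point."""
    kernel = np.ones((2 * px_per_step + 1, 2 * px_per_step + 1), np.uint8)
    inner, pts = mask.copy(), []
    for _ in range(steps):
        inner = cv2.erode(inner, kernel)  # next equidistant inner silhouette
        cnts, _ = cv2.findContours(inner, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if not cnts:
            break
        c = max(cnts, key=cv2.contourArea).reshape(-1, 2)
        pts.append(c[np.argmin(np.linalg.norm(c - mouth, axis=1))])  # nearest to mouth
    return np.array(pts)
```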
Allowable Subject Matter
13. Claims 9-10 and 19-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
14. The following is a statement of reasons for the indication of allowable subject matter:
Regarding Claim 9, the combination of prior art teaches the catch monitoring device of Claim 1.
However, in the context of claims 1 and 9 as a whole, the combination of prior art does not teach: reduce dimensionality of the 2D image to an origin and a major axis, wherein the 2D image is a contour image corresponding to the object; calculate at least one convex hull point of the 2D image, wherein at least one line between the at least one convex hull point encloses a contour of the 2D image, and the at least one convex hull point lies on the contour; calculate at least one defect point of the 2D image, wherein the at least one defect point on the contour is farthest from the at least one line; and select a tail fork point, which is farthest from the origin along the major axis, from the at least one defect point, and select a fish mouth point, which is farthest from the origin and the tail fork point along the major axis, from the at least one convex hull point.
Therefore, Claim 9 in the context of claim 1 as a whole does comprise allowable subject matter.
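For the record, the claim-9 geometry can be sketched directly from the claim language (the sketch mirrors the claim, not any cited reference): principal component analysis supplies the origin and major axis, the convex hull and convexity defects supply the candidate points, and the tail fork and fish mouth points are selected by distance along that axis. The combined mouth-point criterion is an assumed interpretation:

```python
# Sketch of the claim-9 steps: origin/major axis by PCA, convex hull points,
# convexity defect points, then tail-fork and mouth selection. The mouth
# criterion (sum of two distances) is one assumed reading of the claim.
import cv2
import numpy as np

def mouth_and_tail_fork(contour: np.ndarray):
    """contour: (N,1,2) int32 OpenCV contour of the fish silhouette."""
    pts = contour.reshape(-1, 2).astype(np.float32)
    origin = pts.mean(axis=0)                                      # reduced origin
    axis = np.linalg.svd(pts - origin, full_matrices=False)[2][0]  # major axis
    along = lambda p: abs(float((p - origin) @ axis))              # |dist along axis|

    hull_idx = cv2.convexHull(contour, returnPoints=False)  # hull points on contour
    defects = cv2.convexityDefects(contour, hull_idx)       # assumes a non-convex
    defect_pts = pts[defects[:, 0, 2]]                      # contour (not None)

    tail_fork = max(defect_pts, key=along)                  # farthest defect point
    hull_pts = pts[hull_idx.reshape(-1)]
    mouth = max(hull_pts, key=lambda p: along(p) + np.linalg.norm(p - tail_fork))
    return mouth, tail_fork
```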
Regarding Claim 10, the combination of prior art teaches the catch monitoring device of Claim 1.
However, in the context of claims 1 and 10 as a whole, the combination of prior art does not teach: decompose the 3D point cloud image into a plurality of components, calculate a center of each slice of each of the plurality of components to extract a plurality of first component skeletons of the plurality of components, and calculate a plurality of second component skeletons connected to each other according to the plurality of first component skeletons; select a fish body skeleton with a longest length from the plurality of second component skeletons; define an endpoint of the fish body skeleton as an intersection point, wherein the endpoint of the fish body skeleton overlaps at least one of the plurality of second component skeletons except the fish body skeleton of the plurality of second component skeletons; determining block and the comparison block, configured to calculate a fish mouth point according to the intersection point, the fish body skeleton, and the 3D point cloud image; select at least one fish tail skeleton from the plurality of second component skeletons according to the intersection point, and define at least one endpoint of the at least one fish tail skeleton, which is different from the intersection point, as at least one critical point; and determining block and the comparison block, configured to calculate at least one extended fish tail point according to the at least one fish tail skeleton, the at least one critical point, and the 3D point cloud image, calculate a plane according to the at least one extended fish tail point and the intersection point, calculate an intersection point set according to the plane and the 3D point cloud image, and select a tail fork point, which is closest to the intersection point, from the intersection point set, wherein the 3D skeleton comprises the fish mouth point, the tail fork point, and the fish body skeleton.
Therefore, Claim 10 in the context of claim 1 as a whole does comprise allowable subject matter.
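Similarly, the claim-10 step of calculating a center of each slice of each component to extract a first component skeleton can be sketched as follows; the slicing axis and slice count are assumptions for illustration:

```python
# Assumed illustration of per-slice skeleton extraction: bin one component's
# points along a long axis and chain the per-slice centroids into a polyline.
import numpy as np

def slice_center_skeleton(cloud: np.ndarray, n_slices: int = 20) -> np.ndarray:
    """cloud: (N,3) points of one component; returns per-slice centers (K,3)."""
    x = cloud[:, 0]  # assume x is the component's long axis
    k = np.minimum(((x - x.min()) / (np.ptp(x) + 1e-12) * n_slices).astype(int),
                   n_slices - 1)                       # slice index per point
    return np.asarray([cloud[k == i].mean(axis=0)      # centroid of each slice,
                       for i in range(n_slices) if (k == i).any()])  # skip empties

# The fish body skeleton would then be the longest such polyline among all
# components, per the claim language.
```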
Regarding Claim 19, the combination of prior art teaches the method of Claim 11.
However, in the context of claims 11 and 19 as a whole, the combination of prior art does not teach: reducing dimensionality of the 2D image to an origin and a major axis, wherein the 2D image is a contour image corresponding to the object; calculating at least one convex hull point of the 2D image, wherein at least one line between the at least one convex hull point encloses a contour of the 2D image, and the at least one convex hull point lies on the contour; calculating at least one defect point of the 2D image, wherein the at least one defect point on the contour is farthest from the at least one line; and selecting a tail fork point, which is farthest from the origin along the major axis, from the at least one defect point, and selecting a fish mouth point, which is farthest from the origin and the tail fork point along the major axis, from the at least one convex hull point.
Therefore, Claim 19 in the context of claim 11 as a whole does comprise allowable subject matter.
Regarding Claim 20, the combination of prior art teaches the method of Claim 11.
However, in the context of claims 11 and 20 as a whole, the combination of prior art does not teach: decomposing the 3D point cloud image into a plurality of components, calculate a center of each slice of each of the plurality of components to extract a plurality of first component skeletons of the plurality of components, and calculate a plurality of second component skeletons connected to each other according to the plurality of first component skeletons; selecting a fish body skeleton with a longest length from the plurality of second component skeletons; defining an endpoint of the fish body skeleton as an intersection point, wherein the endpoint of the fish body skeleton overlaps at least one of the plurality of second component skeletons except the fish body skeleton of the plurality of second component skeletons; calculating a fish mouth point according to the intersection point, the fish body skeleton, and the 3D point cloud image; selecting at least one fish tail skeleton from the plurality of second component skeletons according to the intersection point, and define at least one endpoint of the at least one fish tail skeleton, which is different from the intersection point, as at least one critical point; and calculating at least one extended fish tail point according to the at least one fish tail skeleton, the at least one critical point, and the 3D point cloud image, calculate a plane according to the at least one extended fish tail point and the intersection point, calculate an intersection point set according to the plane and the 3D point cloud image, and select a tail fork point, which is closest to the intersection point, from the intersection point set, wherein the 3D skeleton comprises the fish mouth point, the tail fork point, and the fish body skeleton.
Therefore, Claim 20 in the context of claim 11 as a whole does comprise allowable subject matter.
Conclusion
15. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. These references are recited in the attached PTO-892 form.
16. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL LE whose telephone number is (571)272-5330. The examiner can normally be reached 9am-5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MICHAEL LE/Primary Examiner, Art Unit 2614