DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
35 USC § 101 Statutory Analysis
The claims do not recite any of the judicial exceptions enumerated in the 2019 Revised Patent Subject Matter Eligibility Guidance. In particular, the claims do not recite a method of organizing human activity, such as a fundamental economic concept or managing interactions between people, nor do they recite a mathematical relationship, formula, or calculation. Thus, the claims are eligible because they do not recite a judicial exception.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f):
A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitations use a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are: “the image input unit” is configured to “input a to-be-detected fish school image into a fish school detection model” in claim 13; “the feature extraction unit” is configured to “extract feature information of the to-be-detected fish school image based on the feature extraction layer and determine a fish school feature map and an attention feature map” in claim 13; “the feature fusion unit” is configured to “fuse, based on the feature fusion layer, the fish school feature map and the attention feature map to determine a target fusion feature map” in claim 13; and “the feature recognition unit” is configured to “determine a target fish school detection result based on the feature recognition layer and the target fusion feature map” in claim 13.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f), they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f), applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f).
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. §102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 5-7 and 13-15 are rejected under 35 U.S.C. §102(a)(1) as being anticipated by Han et al. (CN-114694-A) (hereafter referred to as “Han”).
The examiner would like to point out that the various “units” identified in the Claim Interpretation section hereinabove are being interpreted under 35 U.S.C. 112(f) as corresponding to the structure described in connection with FIG. 5.
FIG. 5 is a schematic diagram showing the hardware configuration of the electronic device. The functional configuration described above is achieved by the cooperation of the hardware shown in FIG. 5 and a program. As shown in FIG. 5, the electronic device includes, as its hardware configuration, a processor 501, a memory 503 and a communication interface 502, which are connected to each other by a communication bus 504. The processor 501 controls the other components in accordance with a program stored in the memory 503, performs data processing in accordance with the program, and stores the processing results in the memory 503. The processor can be, for example, a microprocessor. The memory 503 stores the program executed by the processor 501 as well as data, and can be, for example, a ROM (Read Only Memory).
With regard to claim 1, Han describes inputting a to-be-detected fish school image into a fish school detection model (see Figures 1, 2, 3, 4 and 5(f) and refer, for example, to page 11, first full paragraph; Figure 5(f), read vertically, shows the image as it propagates through the system); wherein the fish school detection model comprises a feature extraction layer, a feature fusion layer and a feature recognition layer (see Figure 2 and refer, for example, to page 13, line 6 through page 15, line 2, which discuss a feature extraction module, a dense strategy module, a characteristic module, a depthwise separable convolution module and a target detecting module, and also see Figure 3 and refer, for example, to page 19, line 7 through page 20, line 27, which discuss the various layers corresponding to applicant’s “feature extraction layer, a feature fusion layer and a feature recognition layer”); extracting feature information of the to-be-detected fish school image based on the feature extraction layer (refer, for example, to page 13, lines 26-27 and to page 15, lines 6-8), and determining a fish school feature map and an attention feature map based on an attention mechanism (refer, for example, to page 15, lines 9-21); fusing, based on the feature fusion layer, the fish school feature map and the attention feature map to determine a target fusion feature map (refer, for example, to page 15, lines 22-32, to page 17, lines 1-28 and to page 20, line 22 through page 21, line 22); and determining a target fish school detection result based on the feature recognition layer and the target fusion feature map (refer, for example, to page 16, lines 1-2 and to page 22, lines 20-21).
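For expository purposes only, the claimed extract-fuse-recognize data flow can be sketched in PyTorch-style Python. Every name, channel width, and fusion choice below is an assumption made for illustration; the sketch characterizes neither Han’s nor applicant’s actual implementation.

    # Illustrative sketch only; the module names and the elementwise gating
    # used as "fusion" are assumptions, not Han's or applicant's code.
    import torch
    import torch.nn as nn

    class FishSchoolDetector(nn.Module):
        def __init__(self, in_ch=3, feat_ch=64, out_ch=6):
            super().__init__()
            # Feature extraction layer: yields the fish school feature map.
            self.extract = nn.Sequential(nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU())
            # Attention branch: yields the attention feature map (a sigmoid gate).
            self.attend = nn.Sequential(nn.Conv2d(feat_ch, feat_ch, 1), nn.Sigmoid())
            # Feature fusion layer: combines the two maps into the target fusion feature map.
            self.fuse = nn.Conv2d(feat_ch, feat_ch, 1)
            # Feature recognition layer: maps fused features to detection outputs.
            self.recognize = nn.Conv2d(feat_ch, out_ch, 1)

        def forward(self, image):
            fish_map = self.extract(image)            # fish school feature map
            attn_map = self.attend(fish_map)          # attention feature map
            fused = self.fuse(fish_map * attn_map)    # target fusion feature map
            return self.recognize(fused)              # target fish school detection result

    # e.g., FishSchoolDetector()(torch.rand(1, 3, 256, 256)) has shape (1, 6, 256, 256)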
As to claim 5, Han describes wherein the determining a target fish school detection result based on the feature recognition layer and the target fusion feature map comprises determining types and amounts of target fish in the to-be-detected fish school image based on the feature recognition layer and the target fusion feature map, and determining the target fish school detection result by deleting duplicate detection values based on a non-maximum suppression algorithm (see Figure 5(f) and refer, for example, to page 10, lines 27-33, to page 23, line 12 through page 24, line 8, and to page 24, line 10 through page 31).
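Non-maximum suppression is a conventional post-processing step. Purely to illustrate what deleting duplicate detection values entails, a minimal sketch follows, assuming axis-aligned boxes in [x1, y1, x2, y2] form and an illustrative 0.5 overlap threshold:

    # Minimal greedy non-maximum suppression; the box format and threshold
    # are assumptions for illustration only.
    def iou(a, b):
        # Intersection-over-union of two axis-aligned boxes.
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union > 0 else 0.0

    def nms(boxes, scores, thresh=0.5):
        # Keep the highest-scoring box, delete overlapping duplicates, repeat.
        order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
        keep = []
        for i in order:
            if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
                keep.append(i)
        return keep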
In regard to claim 6, Han describes, before the inputting a to-be-detected fish school image into the fish school detection model, determining a network structure of the fish school detection model, wherein the determining a network structure of the fish school detection model comprises embedding a coordinate attention module and a convolutional block attention module sequentially in a backbone feature extraction network based on a you only look once (YOLOv5s) algorithm network structure (see Figure 3 and refer, for example, to page 11, first full paragraph, to page 16, lines 22-34, and to page 20, lines 1-13).
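The coordinate attention and convolutional block attention modules are published building blocks rather than applicant-specific structures. Purely to illustrate what embedding such a module in a backbone entails, the following abbreviated sketch implements the channel half of a CBAM-style block; the reduction ratio and the omission of the spatial half are assumptions made for brevity:

    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        # Abbreviated CBAM-style channel attention; reduction=16 follows the
        # CBAM paper but is an assumption here.
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(channels, channels // reduction), nn.ReLU(),
                nn.Linear(channels // reduction, channels))

        def forward(self, x):
            b, c, _, _ = x.shape
            avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling branch
            mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling branch
            w = torch.sigmoid(avg + mx).view(b, c, 1, 1)
            return x * w                         # reweight backbone channels

    # "Embedding" means inserting such modules between backbone stages, e.g.,
    # stage_out = ChannelAttention(256)(stage_out), before the next stage runs.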
As to claim 7, Han describes training the fish school detection model, wherein the training the fish school detection model comprises determining a sample fish school image set by obtaining a plurality of sample fish school images and creating labels, training the fish school detection model based on the sample fish school image set, updating network parameters of the fish school detection model based on a target loss function and a cosine annealing method, and iteratively training the fish school detection model based on the updated network parameters until the fish school detection model converges (see Figure 5(f) and refer, for example, to page 10, lines 27-33, to page 16, lines 5-21, to page 19, lines 19-35, to page 23, line 12 through page 24, line 8, and to page 24, line 10 through page 31).
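Cosine annealing is likewise a conventional schedule for updating network parameters during iterative training. A minimal sketch of the schedule follows; the maximum and minimum learning rates are illustrative assumptions:

    import math

    def cosine_annealed_lr(step, total_steps, lr_max=1e-3, lr_min=1e-5):
        # The learning rate decays from lr_max to lr_min along a half cosine;
        # each iteration the network parameters are updated at this rate
        # until the model converges.
        return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * step / total_steps))

    # In PyTorch this schedule is commonly obtained with
    # torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_steps).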
With regard to claim 13, Han describes an image input unit, a feature extraction unit, a feature fusion unit and a feature recognition unit (refer to the paragraph bridging pages 22 and 23, as well as to the paragraph bridging pages 23 and 24); wherein the image input unit is configured to input a to-be-detected fish school image into a fish school detection model (see Figures 1, 2, 3, 4 and 5(f) and refer, for example, to page 11, first full paragraph; Figure 5(f), read vertically, shows the image as it propagates through the system); and the fish school detection model comprises a feature extraction layer, a feature fusion layer and a feature recognition layer (see Figure 2 and refer, for example, to page 13, line 6 through page 15, line 2, which discuss a feature extraction module, a dense strategy module, a characteristic module, a depthwise separable convolution module and a target detecting module, and also see Figure 3 and refer, for example, to page 19, line 7 through page 20, line 27, which discuss the various layers corresponding to applicant’s “feature extraction layer, a feature fusion layer and a feature recognition layer”); wherein the feature extraction unit (refer to the paragraph bridging pages 22 and 23, as well as to the paragraph bridging pages 23 and 24) is configured to extract feature information of the to-be-detected fish school image based on the feature extraction layer (refer, for example, to page 13, lines 26-27 and to page 15, lines 6-8) and determine a fish school feature map and an attention feature map (refer, for example, to page 15, lines 9-21); wherein the feature fusion unit (refer to the paragraph bridging pages 22 and 23, as well as to the paragraph bridging pages 23 and 24) is configured to fuse, based on the feature fusion layer, the fish school feature map and the attention feature map to determine a target fusion feature map (refer, for example, to page 15, lines 22-32, to page 17, lines 1-28 and to page 20, line 22 through page 21, line 22); and wherein the feature recognition unit (refer to the paragraph bridging pages 22 and 23, as well as to the paragraph bridging pages 23 and 24) is configured to determine a target fish school detection result based on the feature recognition layer and the target fusion feature map (refer, for example, to page 16, lines 1-2 and to page 22, lines 20-21).
As to claim 14, Han describes a processor, a memory and a communication bus, wherein the processor and the memory communicate with each other through the communication bus, the memory stores program instructions executable by the processor, and the processor is configured to call the program instructions to implement the fish school detection method as claimed in claim 1 (refer to the paragraph bridging pages 22 and 23, as well as to the paragraph bridging pages 23 and 24).
In regard to claim 15, Han describes a non-transitory computer-readable storage medium, storing a computer program thereon, wherein the computer program is configured to be executed by a processor to implement the fish school detection method as claimed in claim 1 (refer to the paragraph bridging pages 22 and 23, as well as to the paragraph bridging pages 23 and 24).
Allowable Subject Matter
Claims 2-4 and 8-12 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Relevant Prior Art
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Terada, Babazaki, Dong, Kong, Li, Wang S.H., Wang M., Salman, Yang, Suo, Wang G., Chang and Khai all disclose systems similar to applicant’s claimed invention.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jose L. Couso, whose telephone number is (571) 272-7388. The examiner can normally be reached Monday through Friday from 5:30 am to 1:30 pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Bella, can be reached on 571-272-7778. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Center information webpage on the USPTO website. For more information about Patent Center, see https://www.uspto.gov/patents/apply/patent-center. Should you have questions about access to Patent Center, contact the Patent Electronic Business Center (EBC) at 571-272-4100 or via email at ebc@uspto.gov.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
/JOSE L COUSO/Primary Examiner, Art Unit 2667
November 7, 2025