DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged that this application is a National Stage application of International Application No. PCT/EP2021/074670. Receipt is acknowledged that this application claims priority to foreign application EP20196056.4, filed 09/14/2020. Copies of the certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 U.S.C. 119(a)-(d) and 37 CFR 1.55.
Response to Amendment
The amendment filed 11/12/2025 has been entered. Applicant’s amendments to the specification and claims have overcome each and every objection and 35 U.S.C. 101 rejection previously set forth in the Non-Final Office Action mailed 08/12/2025. Claims 1-7 and 9-16 remain pending in the application, with claim 8 having been cancelled.
Response to Arguments
Applicant’s arguments have been considered but are moot because the new ground of rejection does not rely on any combination of references applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Objections
Claim 9 is objected to because of the following informalities: “whether the respective position information associated to said each pixel fulfils the first condition, wherein the first of pixels comprises each pixel of the third set of pixels” should read “whether the respective position information associated to said each pixel of the third set of pixels fulfils the first condition, wherein the first set of pixels comprises each pixel of the third set of pixels”.
Claim 13 is objected to because of the following informalities: “a work deck for positioning a labware item” should read “the work deck for positioning the first labware item” (claim 13 depends from claim 1, which introduces the work deck and the first labware item). For similar reasons, claim 16 is objected to because of the following informalities: “a labware item” should read “the first labware item”.
Appropriate correction is required.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
Regarding claims 12 and 13, the broadest reasonable interpretation for “processing means” will be defined as the following corresponding structure and equivalents thereof. The corresponding structure is a processor or similar substitute of the data processing system (claim 12) or automated laboratory system (claim 13) that performs the claimed method by executing computer algorithm instructions, as disclosed in paragraph 3 on pg. 16 (data processing system) and paragraph 2 on pg. 32 (automated laboratory system) in the specification.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1, 11, and 16 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Claim 1 recites the limitation "the focal plane" in the last line of the claim. There is insufficient antecedent basis for this limitation in the claim. For examination purposes, the focal plane will be interpreted to be the focal plane of the camera introduced in the beginning of the claim.
Claim 11 recites the limitation “a work deck of an automated laboratory system” and “a region of the work deck”. It is unclear whether the work deck of the automated laboratory system is the same work deck introduced in claim 1 and if the aforementioned “region of the work deck” is the same as the “first portion of a work deck” introduced in claim 1. Claim 16 similarly recites “a work deck of an automated laboratory system”, and is thus unclear for the same reasons. For examination purposes, the work decks of claims 11 and 16 will refer to that of claim 1.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6 and 10-16 are rejected under 35 U.S.C. 103 as being unpatentable over Smith et al. (U.S. Patent Application Publication No. 2018/0183986 A1), hereinafter Smith, in view of Eckard et al. (U.S. Patent Application Publication No. 2014/0036070 A1), hereinafter Eckard.
Regarding claim 1, Smith teaches a computer implemented method (Smith, para 106: “The process 1600 is illustrated as a logical flow diagram, the operation of which represent a sequence of operations that may be implemented in hardware, computer instructions, or a combination thereof”) for determining a first value of at least a camera parameter of a camera (Smith, para 54: “This disclosure generally relates to determining an exposure setting for a content capture device. The exposure setting may relate to an amount of light a sensor of a content capture device receives when content (e.g., an image or a video) is captured.”; para 105: “the process 1600 may be performed by a computing device (e.g., a content capture device such as a camera)”), the method comprising at least the steps of:
accessing a first image captured by the camera (Smith, para 61: “The process 100 may include receiving an image (110). In some examples, the image may be received from a sensor of the content capture device”; Step 110 in Fig. 1A), wherein the first image displays a scene and comprises a first set of pixels (Smith, image comprises a scene, see image 720 in Fig. 7; para 59: “an object in an image of a scene”; pixels that make up the image, para 57: “pixels of an image from the content capture device”), and wherein the scene comprises a first scene region (Smith, para 63: “first object”), the first scene region being displayed in a first image portion of the first image (Smith, portion of the image corresponding to the first object, para 63: “The process 100 may further include dividing the image into pixel groups… the pixel groups may be associated with each object (e.g., a first object may be a first pixel group”; see first object 422 in Fig. 4);
generating, for each pixel of the first set of pixels, a respective first data item associated to said each pixel of the first set of pixels (Smith, pixel group, para 63), wherein said respective first data item comprises respective position information associated to said each pixel of the first set of pixels, said respective position information being indicative of the position in the first image of said each pixel of the first set of pixels with respect to the location of the first image portion in the first image (Smith, because the image is divided, pixel group defines where the pixel is in the image with respect to other groups - the pixel is in the pixel group of the first object or not; see different groups in Fig. 4); and
determining the first value of the camera parameter by using at least a first plurality of pixels of the first set of pixels (Smith, first object pixels, para 211: “the first AEC instance 1230 may determine one or more first settings 1232 (e.g., an exposure setting, a gain setting, or any combination thereof) for one or more first objects from the first image. The one or more first settings 1232 may be sent to the content capture device 1210 to be used for a future image”), wherein, for each pixel of the first plurality of pixels, the respective position information associated to said each pixel of the first plurality of pixels fulfils a first condition (Smith, if the pixel is in the pixel group of the first object or not), wherein the step of generating, for each pixel of the first set of pixels, the respective first data item associated to said each pixel of the first set of pixels is carried out by using information indicative of a shape of the first image portion (Smith, pixel group may be defined based on shape, para 63: “A size and shape of each pixel group may be predefined. In some examples, the size and shape of each pixel group may be the same or vary. For illustration purposes, the pixel groups will be described as rectangles. However, it should be recognized that the pixel groups may be of any shape that divides the image into a plurality of portions. For example, the pixel groups may be radial from a center of the image. In such an example, each pixel group may include a different range of diameters”);
the method further comprising the step of: acquiring the information indicative of the shape of the first image portion by using at least the first image and a shape determining algorithm for obtaining the shape of the first image portion in the first image (Smith, object recognition system, para 87: “the object in the image may be identified by an object recognition system…The object recognition system may also determine pixel groups that include the object”; para 95: “Referring back to FIG. 3, an area of the image may be identified that includes an object 322”; object recognition is used to determine where objects are located, thus defining their shape in the image; see para 63 citation above where the pixel groups may be associated with each object).
Smith teaches wherein the information indicative of the shape of the first image portion comprises information indicative of the border (Smith, identifying the area of the object indicates the border of the object in the image; the shape of the pixel group for each object also indicates the border of the object, see para 87 and 95 citations above), as seen from the point of view of the focal plane (Smith, image is seen from the point of view of the camera focal plane, parallel to the imaged objects). However, Smith fails to teach a labware item and work deck, and thus fails to teach explicitly wherein, if the first scene region comprises a first labware item and a first portion of a work deck, the information indicative of the shape of the first image portion comprises information indicative of the border of the first labware item, as seen from the point of view of the focal plane.
However, Eckard teaches images of a first scene region (Eckard, para 29: “The detection method according to the invention comprises the detection of an original arrangement of laboratory articles 1 on the work area 2 of the laboratory work station 3 by means of at least one reference digital image 6, where the reference digital images 6 are recorded with at least one digital camera 5 fastened to the support apparatus 4 in a defined position and alignment”; See Figure 1), wherein the first scene region comprises a first labware item (Eckard, laboratory articles, para 29: “The detection method according to the invention comprises the detection of an original arrangement of laboratory articles 1 on the work area 2 of the laboratory work station 3 by means of at least one reference digital image 6, where the reference digital images 6 are recorded with at least one digital camera 5 fastened to the support apparatus 4 in a defined position and alignment”) and a first portion of a work deck (Eckard, work area, para 12: “FIG. 1 shows a laboratory work station used for carrying out the method according to the invention comprising at least one work area and a robot arm used as a support apparatus on which one or two digital cameras are fastened”).
Eckard teaches an automated laboratory system wherein image exposure settings are considered (Eckard, para 48: “Preferably and optionally the exposure of the recorded reference digital images 6 or the complete reference image 15 is checked automatically”). It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the automated laboratory system of Eckard with the method of Smith in order to operate an automated laboratory system based on images of objects with proper exposure settings (Eckard, para 2: “The invention relates to a method and a corresponding apparatus for detecting or checking an arrangement of laboratory articles on a work area of a laboratory work station…These laboratory work stations preferably comprise a motorized robot or robot arm which can be fitted with grippers for gripping the laboratory article and/or with pipettes for receiving and delivering liquid samples”; para 34: “Preferred are digital cameras which are configured for automatic regulation of focus, white balance, exposure and contrast”; para 89: “If only slight deviations could be determined or if unknown objects (such as tools, note blocks, gloves etc.) not present when detecting the original arrangement of laboratory articles 1 are located on the work area 2, a corresponding alarm is given to the user”). In the combination of Smith in view of Eckard, the information indicative of the shape of the first image portion would comprise information indicative of the border of the objects in the image, taught by Smith, including when the object is a first labware item, taught by Eckard.
Regarding claim 2 (dependent on claim 1), Smith in view of Eckard teaches wherein, for each pixel of the first set of pixels, the respective position information associated to said each pixel of the first set of pixels fulfils the first condition if said respective position information specifies that said each pixel of the first set of pixels is arranged in the first image outside the first image portion, or wherein, for each pixel of the first set of pixels, the respective position information associated to said each pixel of the first set of pixels fulfils the first condition if said respective position information specifies that the first image portion comprises said each pixel of the first set of pixels (Smith, pixel group specifies if the pixel is in the pixel group of the first object or not; See Figure 4 where different objects are in distinct groups).
Regarding claim 3 (dependent on claim 1), Smith in view of Eckard teaches wherein, for each pixel of the first set of pixels, the respective position information associated to said each pixel of the first set of pixels specifies whether the first image portion comprises said each pixel of the first set of pixels (Smith, pixel group specifies if the pixel is in the pixel group of the first object or not; See Figure 4 where different objects are in distinct groups).
Regarding claim 4 (dependent on claim 1), Smith in view of Eckard teaches further comprising the step of: acquiring information indicative of the location of the first image portion in the first image by using the first image (Smith, image is divided into portions and the pixel groups are defined, see step 120 in Fig. 1A and para 63; see also first object 422 defined in Fig. 4).
Regarding claim 5 (dependent on claim 4), Smith in view of Eckard teaches wherein the step of acquiring the information indicative of the location of the first image portion in the first image is carried out by using: a locating algorithm for determining the location of the first image portion in the first image, information indicative of the position of the scene and of the camera with respect to one another, a set of extrinsic calibration parameters associated with the camera, a set of intrinsic calibration parameters associated with the camera, information indicative of the shape of the first scene region, and/or information indicative of the location of the first scene region in the scene (Smith, location algorithm, para 87: “the object in the image may be identified by an object recognition system”).
Regarding claim 6 (dependent on claim 1), Smith in view of Eckard teaches further comprising the steps of:
acquiring a second image by using at least the first value of the camera parameter (Smith, para 211: “the first AEC instance 1230 may determine one or more first settings 1232 (e.g., an exposure setting, a gain setting, or any combination thereof) for one or more first objects from the first image. The one or more first settings 1232 may be sent to the content capture device 1210 to be used for a future image. In some examples, the future image may be the next image”);
acquiring a third image by using at least a second value of the camera parameter (Smith, para 212: “the second AEC instance 1240 may determine one or more second settings 1242 (e.g., an exposure setting, a gain setting, or any combination thereof) for one or more second objects from the second image. In such examples, the second image may be captured before or after the first image. The one or more second settings 1242 may be sent to the content capture device 1210 to be used for a future image, similar to as described above for the first AEC instance”); and
constructing a fourth image by using the second image, the third image and a High-dynamic-range algorithm (Smith, para 205-206: “the image stitching process 1200 may combine images that were captured based upon adjustments from each of the multiple instances of AEC…a content capture device 1210 may be configured in a high dynamic range (HDR) mode. The HDR mode may facilitate capturing multiple images of a similar scene and stitching the multiple images together”; See Fig. 12A wherein the images captured with the updated settings are fed to the image stitcher 1202 to output image 1204).
Regarding claim 10 (dependent on claim 1), Smith in view of Eckard teaches wherein the camera parameter is selected from the group consisting of the gain, ISO, exposure value, exposure time, aperture size, an entry of a white balance matrix, focusing distance, brightness, contrast, f-stop, and resolution (Smith, para 211: “an exposure setting, a gain setting, or any combination thereof”).
Regarding claim 11 (dependent on claim 1), Smith in view of Eckard teaches wherein the scene comprises a work deck of an automated laboratory system (Eckard, work area of the laboratory work station, see para 12 of Eckard and the combination in claim 1) and the first scene region comprises a region of the work deck (Eckard, see different regions of the work area in Figures 1-3).
Regarding claim 12, Smith in view of Eckard teaches a data processing system (Smith, computer system, see citation below) comprising processing means configured to perform the method according to claim 1 (Smith, processor, para 238: “process 1400 may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof”).
Regarding claim 13, Smith in view of Eckard teaches an automated laboratory system (Taught by Smith in view of Eckard, see para 12 of Eckard and the combination in claim 1) comprising a camera (Smith, para 105: “the process 1600 may be performed by a computing device (e.g., a content capture device such as a camera)”), processing means configured to perform the method according to claim 1 (Smith, processor, para 238: “process 1400 may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof”), and the work deck (Eckard, work area in FIG. 1) for positioning the first labware item (Eckard, laboratory articles, see para 29 and para 12 cited in claim 1).
Regarding claim 14, Smith in view of Eckard teaches at least one non-transitory computer readable storage medium comprising instructions which, when the instructions are executed, cause one or more processors to carry out the method according to claim 1 (Smith, para 107: “the code may be stored on a machine-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors. The machine-readable storage medium may be non-transitory.”).
Regarding claim 15, Smith in view of Eckard teaches a computer-readable storage medium comprising instructions which, when executed by a computer, cause said computer to carry out the method according to claim 1 (Smith, para 238: “process 1400 may be performed under the control of one or more computer systems configured with executable instructions and may be implemented as code (e.g., executable instructions, one or more computer programs, or one or more applications) executing collectively on one or more processors, by hardware, or combinations thereof. As noted above, the code may be stored on a machine-readable storage medium, for example, in the form of a computer program comprising a plurality of instructions executable by one or more processors”).
Regarding claim 16 (dependent on claim 1), Smith in view of Eckard teaches wherein the scene comprises a work deck of an automated laboratory system (Eckard, work area of the laboratory work station, see para 12 of Eckard and the combination in claim 1) and the first scene region comprises at least a portion of the first labware item (Eckard, laboratory articles, see para 29 and para 12 cited in claim 1).
Claims 7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Smith in view of Eckard, in further view of Velarde et al. (U.S. Patent No. 8,947,555 B2), hereinafter Velarde.
Regarding claim 7 (dependent on claim 6), Smith in view of Eckard teaches wherein the scene comprises a second scene region, the second scene region being displayed in a second image portion of the first image (Smith, region with the second object, para 63: “For another example, the pixel groups may be associated with each object (e.g., a first object may be a first pixel group, a second object may be a second pixel group, and the rest of the image may be a third pixel group)”), wherein the first image comprises a second set of pixels (Smith, pixels that make up the image, para 57: “pixels of an image from the content capture device”; See paragraph 1 on pg. 25 of the Specification of the claimed invention wherein “the second set of pixels may be equal to the first set of pixels”) and the method further comprises the steps of:
determining the second value of the camera parameter by using at least a second plurality of pixels of the second set of pixels (Smith, second object pixels, para 212: “the second AEC instance 1240 may determine one or more second settings 1242 (e.g., an exposure setting, a gain setting, or any combination thereof) for one or more second objects from the second image”), wherein, for each pixel of the second plurality of pixels, the respective position information associated to said each pixel of the second plurality of pixels fulfils a second condition (Smith, if the pixel is in the pixel group of the second object or not).
However, Smith fails to explicitly teach a second data item, and therefore fails to teach generating, for each pixel of the second set of pixels, a respective second data item associated to said each pixel of the second set of pixels, wherein said respective second data item comprises respective position information associated to said each pixel of the second set of pixels, said respective position information being indicative of the position in the first image of said each pixel of the second set of pixels with respect to the location of the second image portion in the first image (Smith teaches the pixel group, introduced in claim 1).
Velarde discloses a method for generating high dynamic range images (Velarde, abstract). Velarde teaches generating, for each pixel of a second set of pixels, a respective second data item (Velarde, two data items – distance to bright region and distance to dark region, recited in Claim 1: “filtering first pixels in the one or more bright regions based on each pixel's distance to a set of reference points… filtering second pixels in the one or more dark regions based on each pixel's distance to a set of reference points”; col 14, ln 48-54: “Once the distance to each reference point is calculated, process 1000 moves to step 1025, where the shortest distance to a reference point is compared to a threshold value. If the closest reference point is further than the threshold value, the point is considered an outlier, and process 1000 moves from decision step 1025 to step 1060 where the data point is discarded”; see also Fig. 4A-4C) associated to said each pixel of the second set of pixels, wherein said respective second data item comprises respective position information associated to said each pixel of the second set of pixels, said respective position information being indicative of the position in the first image of said each pixel of the second set of pixels with respect to the location of the second image portion in the first image (Velarde, pixel distance to reference points).
It would have been obvious to a person having ordinary skill in the art, before the effective filing date of the claimed invention, to have combined the method of utilizing a second data item of Velarde with the method of Smith in view of Eckard in order to utilize each pixel’s position relative to each object/region to determine its grouping (Velarde, col 14, ln 54-61: “A data point that is far from each of the reference points is most likely not a gray object. Such observations are discarded in one embodiment as they can have a negative impact on the overall distance comparison. If at least one reference point is within the threshold distance however, process 1000 moves to step 1027, where the data point is saved for later use in step 1040 when the composite point is created”).
Regarding claim 9 (dependent on claim 7), Smith in view of Eckard and Velarde teaches wherein the step of determining the first value of the camera parameter by using at least the first set of pixels comprises: selecting the first plurality of pixels at least by checking, for at least each pixel of a third set of pixels, whether the respective position information associated to said each pixel of the third set of pixels fulfils the first condition, wherein the first set of pixels comprises each pixel of the third set of pixels (Smith, camera parameter is set based on pixels fulfilling a first condition – being part of the first object, para 211: “the first AEC instance 1230 may determine one or more first settings 1232 (e.g., an exposure setting, a gain setting, or any combination thereof) for one or more first objects from the first image”; para 87: “The object recognition system may also determine pixel groups that include the object”; see FIG. 4 where there are at least three groups of pixel for three objects in the image).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Davis et al. (U.S. Patent Application Publication No. 2022/0276274 A1) teaches a laboratory system wherein labware is recognized (Davis, abstract: “an imaging device configured to monitor the deck of the laboratory workstation by creating one or more images of the deck; and a processor configured to recognize, in the one or more images created by the imaging device, an item of labware loaded onto the deck by an operator”). FIG. 6, attached below, shows different objects labeled with borders indicated by a bounding box.
[Image: FIG. 6 of Davis (media_image1.png), greyscale]
Lee et al. (cited in Non-Final Rejection mailed 08/12/2025 - U.S. Patent Application Publication No. 2006/0165288 A1) teaches a similar system/method (abstract: “The current image is captured using a local exposure value determined from luminance readings of pixels within an object region of an object-of-interest of a previous image.”).
Fan (cited in Non-Final Rejection mailed 08/12/2025 - U.S. Patent Application Publication No. 2016/0286114 A1) teaches a similar system/method (abstract: “A method, a device and a computer readable for automatically identifying a Christmas tree scene and setting a camera's focus and/or exposure parameters in a way that yields images with high image quality”).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to EMMA E DRYDEN whose telephone number is (571)272-1179. The examiner can normally be reached M-F 9-5 EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ANDREW BEE can be reached at (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/EMMA E DRYDEN/Examiner, Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677