Prosecution Insights
Last updated: April 19, 2026
Application No. 18/396,862

ENVIRONMENTAL MONITORING DEVICE USING SMART PHONE

Non-Final Office Action: §103, §112
Filed: Dec 27, 2023
Examiner: SINGER, DAVID L
Art Unit: 2855
Tech Center: 2800 (Semiconductors & Electrical Systems)
Assignee: Deepvisions Co. Ltd.
OA Round: 1 (Non-Final)

Grant probability: 68% (favorable)
Expected OA rounds: 1-2
Estimated time to grant: 2y 10m
Grant probability with interview: 99%

Examiner Intelligence

Career allow rate: 68% (281 granted / 415 resolved); above average overall, at the Tech Center average
Interview lift: +43.8% (strong), comparing resolved cases with vs. without an interview
Typical timeline: 2y 10m average prosecution; 31 applications currently pending
Career history: 446 total applications across all art units
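The headline figures above are simple ratios over the examiner's resolved cases. Below is a minimal sketch of the arithmetic: the 281 granted / 415 resolved counts come from this dashboard, while `interview_lift` is a hypothetical helper illustrating how such a delta is typically formed (the dashboard reports the +43.8 figure directly; the underlying per-cohort rates are not given).

```python
def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point gap between interview and no-interview cohorts.

    Hypothetical helper: the dashboard shows only the resulting delta,
    not the cohort rates, so the inputs here are placeholders.
    """
    return rate_with - rate_without

career = allow_rate(granted=281, resolved=415)  # dashboard counts
print(f"Career allow rate: {career:.1f}%")      # ~67.7%, displayed as 68%
```

281/415 rounds to 67.7%, which the dashboard displays as 68%.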

Statute-Specific Performance

§101:  4.2%  (-35.8% vs TC avg)
§102: 14.2%  (-25.8% vs TC avg)
§103: 50.8%  (+10.8% vs TC avg)
§112: 25.2%  (-14.8% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 415 resolved cases.
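Assuming each "vs TC avg" delta is the examiner's rate minus the Tech Center average, the implied baseline can be recovered by subtraction. Notably, every statute's delta points back to the same 40.0% estimate, consistent with a single black baseline line as described in the caption. A small sketch of that check:

```python
# Examiner per-statute rates and reported deltas vs the TC average (percent),
# taken from the table above.
examiner = {"101": 4.2, "102": 14.2, "103": 50.8, "112": 25.2}
delta_vs_tc = {"101": -35.8, "102": -25.8, "103": 10.8, "112": -14.8}

# Implied TC average per statute: examiner rate minus reported delta.
implied_tc = {s: round(examiner[s] - delta_vs_tc[s], 1) for s in examiner}
print(implied_tc)  # every statute implies the same 40.0% baseline
```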

Office Action

Rejections under §103 and §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 12/27/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, that information disclosure statement is being considered by the Examiner.

The information disclosure statement filed 06/04/2025 fails to comply with the provisions of 37 CFR 1.98(a)(4) because it lacks the appropriate size fee assertion. Every IDS filed under 37 CFR 1.97 on or after January 19, 2025 requires a written statement called an IDS fee assertion. The statement must clearly indicate either (1) that the IDS is accompanied by the appropriate IDS size fee or (2) that no IDS size fee is required. Therefore, the IDS has been placed in the application file, but the information referred to therein has not been considered on the merits, except where the Examiner has cited the references on the PTO-892.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

MPEP § 2173.02(I) states in part: “if the language of a claim, given its broadest reasonable interpretation, is such that a person of ordinary skill in the relevant art would read it with more than one reasonable interpretation, then a rejection under 35 U.S.C. 112(b) or pre-AIA 35 U.S.C. 112, second paragraph is appropriate”.

Claims 7-8 and 11-13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claims 7 and 8, the limitation “one or more of environment information and climate information”, given its broadest reasonable interpretation, may be read with more than one reasonable interpretation, including “one or more of each of environment information and climate information” or, alternatively, “one or more of information selected from the group consisting of environment information and climate information”. To the best understanding of the Examiner, and for the purpose of examination, the Examiner’s interpretation will be in accordance with the latter, broader reasonable claim interpretation.
Regarding claim 11 and substantially similar claim 12, the limitation (using the language of claim 11) “one or more of a photographed image, a converted image, converted data, and a fine dust concentration measurement value”, given its broadest reasonable interpretation, may be read with more than one reasonable interpretation, including “one or more of each of a photographed image, a converted image, converted data, and a fine dust concentration measurement value” or, alternatively, “one or more of an element selected from the group consisting of a photographed image, a converted image, converted data, and a fine dust concentration measurement value”. To the best understanding of the Examiner, and for the purpose of examination, the Examiner’s interpretation will be in accordance with the latter, broader reasonable claim interpretation.

Dependent claims of rejected claims are likewise rejected.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-2, 5, and 9-11 are rejected under 35 U.S.C. 103 as being unpatentable over newly cited Kim* et al. (KR 102233402 B1; hereafter “Kim”) in view of newly cited Tages (US 20140274232 A1; hereafter “Tages”), and further in view of newly cited Lee et al. (NPL, “ASSESSMENT OF SMARTPHONE-BASED TECHNOLOGY FOR REMOTE ENVIRONMENTAL MONITORING AND ITS DEVELOPMENT”; hereafter “Lee”).
*Machine translation provided by the Examiner with the foreign document and utilized for English citations.

Regarding independent claim 1, Kim teaches an environment monitoring device (fig. 1, computing environment 10) to be utilized indoors or outdoors (Title, “METHOD FOR ESTIMATING CONCENTRATION OF FINE DUST AND APPARATUS FOR EXECUTING THE METHOD”; Abstract, “Disclosed are a method for estimating the concentration of fine dust to easily estimate the concentration of fine dust and an apparatus for executing the same”), the environment monitoring device (fig. 1, computing environment 10) comprising: a smart phone (fig. 1, computing device 12) (about the middle of page 4, “the computing device 12 may be a mobile device such as a smart phone”), wherein the smart phone (fig. 1, computing device 12) is configured to measure a fine dust concentration around the environment monitoring device (fig. 1, computing environment 10) based on an image photographed with a camera (camera of the smart phone / computing device 12) (about the middle of the page, “camera provided in the computing device 12”), and display information on the measured fine dust concentration via a screen (display device 24 of computing device 12 as the smart phone) of the smart phone (fig. 1, computing device 12) (bottom of page 5, “visually express the fine dust concentration”).

Kim does not teach: 1) a first case and a second case coupled to the first case, with the smart phone fixed in an inner space between the first case and the second case; and 2) wherein the environment monitoring device is to be installed indoors or outdoors.

Regarding item 1), the Examiner takes Official Notice that it is conventional to place a smart phone in an inner space between a first and second case (i.e., the typical consumer phone covers commercially available; the Examiner notes this as a broad, reasonable interpretation of the claimed smart phone cases).
Furthermore, and as factual evidence supporting the foregoing assertion, Tages teaches a first case and a second case coupled to the first case, with the smart phone fixed in an inner space between the first case and the second case (Title, “WATERPROOF MOBILE DEVICE CASE”; Abstract; [0030], “case 10 is sized to receive and releasably retain a mobile electronic device such as a smartphone, a tablet computer, and the like. The case 10 includes a cover 12 that is releasably engageable to a base 14 to retain the mobile electronic device within the case 10 in an interior defined by the base 14. The screen protector 16 is releasably engageable and sealable against the base 14. In addition, the screen protector 16 is releasably engageable and sealable against the mobile electronic device when the mobile electronic device is disposed within the base 14”).

In view of the above, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine a conventional smartphone casing, as factually supported by Tages, with Kim’s smartphone for the expected purpose of providing a protective cover for said smartphone. The Examiner additionally notes that, in the particular case of combining Tages’ specific exemplary smartphone cover, additional advantages include modularity of the screen-protecting portion for replaceability, as well as waterproofing.
[Image: media_image1.png (greyscale)]

Regarding item 2), Lee teaches wherein the environment monitoring device is to be installed outdoors (Title, “ASSESSMENT OF SMARTPHONE-BASED TECHNOLOGY FOR REMOTE ENVIRONMENTAL MONITORING AND ITS DEVELOPMENT”; Abstract; page 506, first sentence, “smartphone-based environmental monitoring technology”; first full paragraph on page 506, “Most current smartphones (e.g., iPhone, HTC, Samsung Galaxy, Black-Berry) have a micro-electro-mechanical system (MEMS) equipped not only with sensors and chips (including high-resolution camera, global positioning system [GPS], magnetometer, third-generation [3G] chip, Wi-Fi chip), but also with an operating system (OS) such as iOS or Android OS. In addition to receiving high-resolution images, camera location information, and camera orientation information on a real-time basis, smartphones allow a monitoring system to be established at a lower cost than other current technologies”; page 513, “network environment for sending monitoring images and information to the MIMS Web server”; page 514, “Users are able to query monitoring information for each camera by connecting to the MIMS over the Web”; see also figure 1 on page 507).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lee’s smartphone environmental monitoring system (inclusive of support and compartment, as well as software and networking for remote monitoring and control) with Kim’s smartphone environment monitoring of fine dust, thereby providing the expected advantages of additional battery backup, encasement, support, optional solar powering when used in an outdoor setting (as opposed to the limited-time capability of indoor use without solar power input), and convenient software to support transferring and managing monitored information.
With further respect to use indoors/outdoors, it has been held that a preamble is denied the effect of a limitation where the claim is drawn to a structure and the portion of the claim following the preamble is a self-contained description of the structure, not depending for completeness upon the introductory clause. See MPEP § 2111.02 and Kropa v. Robie, 88 USPQ 478 (CCPA 1951). Additionally, it has been held that a recitation with respect to the manner in which a claimed apparatus is intended to be employed does not differentiate the claimed apparatus from a prior art apparatus satisfying the claimed structural limitations. See MPEP § 2114(II) and Ex parte Masham, 2 USPQ2d 1647 (1987).

[Image: media_image2.png (greyscale)]

Regarding claim 2, which depends on claim 1, Kim as previously modified (see the analysis of the independent claim, especially as factually supported by Tages) suggests wherein the first case (Tages, fig. 1, cover 12) comprises a first opening (fig. 1, opening 22) provided corresponding to the size of a screen (Tages: screen of the smartphone; Kim: display device 24 of computing device 12 as the smart phone) of the smart phone (fig. 1, computing device 12) to expose the screen of the smart phone to the outside (Tages: [0032], “permit user control of a touch-sensitive display of the mobile electronic device, which may include, for example, a resistive or capacitive touch-sensitive display”); and the second case (Tages, fig. 1, base 14) is coupled to the first case (Tages, fig. 1, cover 12) at the rear of the first case, and comprises a second opening (shown, not labeled; additional obviousness analysis provided) provided corresponding to the camera (Kim’s camera of the smart phone / computing device 12) of the smart phone (fig. 1, computing device 12).
The Examiner acknowledges that Tages does not explicitly state an opening corresponding to the smartphone camera. However, it does not matter that the feature shown (in this case, an opening for the smartphone camera) is unexplained in the specification; the drawings must be evaluated for what they reasonably disclose and suggest to one of ordinary skill in the art. See MPEP § 2125 and In re Aslanian, 590 F.2d 911, 200 USPQ 500 (CCPA 1979). Furthermore, the Examiner takes Official Notice that it is conventional for smartphone covers to have openings corresponding to the camera(s) of the smartphone. In view of the above, either one of ordinary skill in the art at the time the invention was effectively filed would have at once envisaged that secondary reference Tages reasonably shows a camera opening in the cover, or, in the alternative, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine conventional camera openings with the cover, thereby permitting the smartphone camera(s) to be used while the cover is on, as is routine in the art.

Regarding claim 5, which depends on claim 1, Kim teaches wherein the smart phone (fig. 1, computing device 12) comprises (additional obviousness analysis will be provided for distinct modules as opposed to portions of the computing device): a photographing module (portion of computing device 12 for photographing) including the camera (camera of the smart phone / computing device 12), and allowing the camera to photograph the front (at once so envisaged; additional obviousness analysis provided); a conversion module (portion of computing device 12 for image conversion) configured to convert all or a part of a photographed image of the photographing module into an image having different characteristics so as to generate one or more converted images (Abstract, “loss-compressing the captured image and storing the compressed image by a preset unit time length”; second-to-last paragraph of page 5, “the computing device 12 may generate a residual image by calculating a difference in pixel values between pixels of the compressed image and the fine dust removal image. In this case, the residual image reflects the amount of change in the image due to fine dust based on the compressed image”; last paragraph of page 5, “computing device 12 may convert the residual image into a gray scale image and visualize it (ie, visually express the fine dust concentration)”); a fine dust measurement module (portion of computing device 12 for fine dust measurement) configured to input the converted image to a pre-trained deep learning model (machine learning technique CNN) so as to measure a fine dust concentration at a target point (point where photography is being used) of photography (about the middle of page 5, “perform a process of generating a fine dust removal image from the compressed image by a machine learning technique. For example, a convolution neural network (CNN) may be used as a machine learning technique”; about the middle of page 5, “a convolutional neural network (CNN) includes a feature extraction network (N1) and a reconstruction network (N2). Here, the feature extraction network N1 may be a network that has been trained to generate a delivery amount map based on the input R channel compressed image, G channel compressed image, and B channel compressed image. When the reconstruction network (N2) receives a transmission amount map and a compressed image (an image in which the R channel compressed image, the G channel compressed image, and the B channel compressed image are combined) as inputs, an image from which fine dust has been removed (fine dust removed image) It may be a network that has been trained to output”); and a display module (portion of computing device 12 for displaying with display device 24) configured to display information on the measured fine dust concentration on a screen (display device 24 of computing device 12 as the smart phone) (Abstract, “acquiring an image captured at a predetermined place; loss-compressing the captured image and storing the compressed image by a preset unit time length; generating a fine dust removal image from which fine dust is removed from the compressed image; generating a residual image through a difference between the compressed image and the fine dust removal image; and estimating the concentration of fine dust in the captured image on the basis of the residual image”).

The Examiner acknowledges that Kim does not explicitly state separate, distinct modules for each of the claimed modules, including the photographing module, conversion module, fine dust measurement module, and display module. However, the Examiner notes that it has been held that: constructing a formerly integral structure in various elements involves only routine skill in the art, see MPEP § 2144.04(V)(C), Nerwin v. Erlichman, 168 USPQ 177, 179 (BPAI 1969), and In re Dulberg, 289 F.2d 522, 523, 129 USPQ 348, 349 (CCPA 1961); and that forming in one piece an article which has formerly been formed in two pieces and put together involves only routine skill in the art, see MPEP § 2144.04(V)(B), Howard v. Detroit Stove Works, 150 U.S. 164 (1893), and In re Larson, 340 F.2d 965, 968, 144 USPQ 347, 349 (CCPA 1965). In the present case, only ordinary skill in the art is required to reallocate control/computation operations to specialized module components of a computer/processor/controller as convenient, and it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to so allocate, with the expected benefits that the circuitry for each task can then be more specialized and more easily replaced and/or repaired/updated.

With respect to photographing the front, the Examiner notes that front/back is a matter of perspective and/or nomenclature (e.g., the back of the phone could be facing toward the front of the device). Additionally, the Examiner takes Official Notice that it is conventional to have a camera on each side of the phone (i.e., front-facing and back-facing cameras) and that it is likewise conventional/routine to image with either the front or back camera of the phone as convenient. Therefore, either one of ordinary skill in the art at the time the invention was effectively filed would have at once envisaged that the smartphone camera is allowed to photograph the front, or, in the alternative, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include conventional front- and back-facing cameras with Kim’s smart phone and perform routine imaging with said cameras, including of the front.

Regarding claim 9, which depends on claim 1, Kim teaches wherein the smart phone (fig. 1, computing device 12) comprises: a photographing module (portion of computing device 12 for photographing) including the camera (camera of the smart phone / computing device 12), and allowing the camera to photograph the front (at once so envisaged; additional obviousness analysis provided); a conversion module (portion of computing device 12 for image conversion) configured to perform a first-type conversion for converting all or a part of a photographed image of the photographing module into an image of different characteristics to generate a converted image, or perform a second-type conversion for converting all or a part of a photographed image into data of different types to generate converted data (Abstract, “loss-compressing the captured image and storing the compressed image by a preset unit time length”; second-to-last paragraph of page 5, “the computing device 12 may generate a residual image by calculating a difference in pixel values between pixels of the compressed image and the fine dust removal image. In this case, the residual image reflects the amount of change in the image due to fine dust based on the compressed image”; last paragraph of page 5, “computing device 12 may convert the residual image into a gray scale image and visualize it (ie, visually express the fine dust concentration)”); and a fine dust measurement module (portion of computing device 12 for fine dust measurement) configured to input the converted image or the converted data to a pre-trained deep learning model (machine learning technique CNN) to measure a fine dust concentration at a target point (point where photography is being used) of photography (about the middle of page 5, “perform a process of generating a fine dust removal image from the compressed image by a machine learning technique. For example, a convolution neural network (CNN) may be used as a machine learning technique”; about the middle of page 5, “a convolutional neural network (CNN) includes a feature extraction network (N1) and a reconstruction network (N2). Here, the feature extraction network N1 may be a network that has been trained to generate a delivery amount map based on the input R channel compressed image, G channel compressed image, and B channel compressed image. When the reconstruction network (N2) receives a transmission amount map and a compressed image (an image in which the R channel compressed image, the G channel compressed image, and the B channel compressed image are combined) as inputs, an image from which fine dust has been removed (fine dust removed image) It may be a network that has been trained to output”).

The Examiner acknowledges that Kim does not explicitly state separate, distinct modules for each of the claimed modules, including the photographing module, conversion module, and fine dust measurement module. However, the Examiner notes that it has been held that: constructing a formerly integral structure in various elements involves only routine skill in the art, see MPEP § 2144.04(V)(C), Nerwin v. Erlichman, 168 USPQ 177, 179 (BPAI 1969), and In re Dulberg, 289 F.2d 522, 523, 129 USPQ 348, 349 (CCPA 1961); and that forming in one piece an article which has formerly been formed in two pieces and put together involves only routine skill in the art, see MPEP § 2144.04(V)(B), Howard v. Detroit Stove Works, 150 U.S. 164 (1893), and In re Larson, 340 F.2d 965, 968, 144 USPQ 347, 349 (CCPA 1965). In the present case, only ordinary skill in the art is required to reallocate control/computation operations to specialized module components of a computer/processor/controller as convenient, and it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to so allocate, with the expected benefits that the circuitry for each task can then be more specialized and more easily replaced and/or repaired/updated.

With respect to photographing the front, the Examiner notes that front/back is a matter of perspective and/or nomenclature (e.g., the back of the phone could be facing toward the front of the device). Additionally, the Examiner takes Official Notice that it is conventional to have a camera on each side of the phone (i.e., front-facing and back-facing cameras) and that it is likewise conventional/routine to image with either the front or back camera of the phone as convenient. Therefore, either one of ordinary skill in the art at the time the invention was effectively filed would have at once envisaged that the smartphone camera is allowed to photograph the front, or, in the alternative, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include conventional front- and back-facing cameras with Kim’s smart phone and perform routine imaging with said cameras, including of the front.

Regarding claim 10, which depends on claim 9, Kim reasonably teaches (it being at once envisaged that smart phones communicate information to each other) wherein the environment monitoring device (fig. 1, computing environment 10; as a smart phone) is provided to be capable of communicating with one or more other environment monitoring devices (other smart phones) in the vicinity of the environment monitoring device, and provided to transmit and receive information to/from other environment monitoring devices (other smart phones).
Kim does not explicitly teach wherein the environment monitoring device is provided to be able to communicate with one or more other environment monitoring devices installed in the vicinity of the environment monitoring device, and provided to receive a fine dust information request from other environment monitoring devices.

Lee teaches wherein an environment monitoring device (smartphone) (Title, “ASSESSMENT OF SMARTPHONE-BASED TECHNOLOGY FOR REMOTE ENVIRONMENTAL MONITORING AND ITS DEVELOPMENT”) is provided to be able to communicate with one or more other environment monitoring devices (other smart phones) installed in the vicinity of the environment monitoring device (smart phone), and provided to receive an information request from other environment monitoring devices (other smart phones) (section “Smartphone-Based Environment Monitoring App”, “The smartphone-based environmental monitoring app automatically captures images in accordance with a user-set schedule in the Android OS, or in response to commands transmitted via short message service (SMS), and then sends monitoring information to the MIMS server at the monitoring center (Figure 3). This application was developed for environmental monitoring, using the Android smartphone app.[19,20] We designed the smartphone-based environmental monitoring app to capture images of desired objects in accordance with a preset schedule, or in response to a request sent via SMS, and send them to the MIMS Web server”; bottom of page 515 through top of page 516, “The proposed system not only allows SMS messages to be sent, but also enables system control via SMS. These functions make it possible to send monitoring information or information on emergency situations to the mobile phone of a manager, and also provide a very effective way to remotely control the image-capture schedule and check system status in a wireless network environment”; see Table 2; see fig. 4 for an exemplary vicinity-location experiment).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lee’s smartphone environmental monitoring system (inclusive of support and compartment, as well as software and networking for remote monitoring and control) with Kim’s smartphone environment monitoring of fine dust for the same combination and motivation provided for the independent claim, the Examiner further emphasizing that Lee’s SMS commands enable convenient interaction with a smartphone app for sending information, receiving information, and controlling the smartphones for environmental monitoring of a plurality of sensor readings, including those of the smartphones’ cameras. The Examiner additionally notes, with respect to installation in the same vicinity, that the additional smartphones taught by Lee provide additional sensor data which could, for example, be used for stereo imaging (see Lee, section “Experimental Configuration”), or advantageously for other activities/reasons (asserted by the Examiner to be conventional), including validity verification (by comparing data between devices and/or through general statistical approaches), providing robustness (in case of failure), and/or providing an expanded mapping of the sensor data.

Regarding claim 11, which depends on claim 10, as best understood, Kim as modified (especially by Lee; see the analysis of preceding claim 10 and independent claim 1) suggests wherein the environment monitoring device (Kim: fig. 1, computing environment 10) is configured to transmit one or more of a photographed image, a converted image, converted data, and a fine dust concentration measurement value to another environment monitoring device (fig. 1, computing environment 10; other smart phones) in response to the fine dust information request (Kim teaches the fine dust information; Lee teaches generic information and a plethora of possible sensed information based on the type of phone, including information from cameras) from other environmental monitoring devices (other smart phones; see especially the analysis of claim 10 pertaining to Lee and smartphone SMS interactions/controls and the associated app).

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over newly cited Kim in view of newly cited Tages, newly cited Lee, and further in view of newly cited Bongsu* et al. (KR 20220057026 A; hereafter “Bongsu”) with Applicant**-cited Jeong* et al. (KR 102326208 B1; hereafter “Jeong”).

*Machine translation provided by the Examiner with each foreign document and utilized for English citations.
**Technically, the IDS lacked the proper assertion and the Examiner cited the reference on the PTO-892.

Regarding claim 6, which depends on claim 5, Kim teaches wherein the display module (portion of computing device 12 for displaying with display device 24) is configured to classify a value of the measured fine dust concentration into any one level among a plurality of preset levels, and display visual representation contents on the screen (display device 24 of computing device 12 as the smart phone) according to the classified level (last paragraph of page 5, “image into a gray scale image and visualize it (ie, visually express the fine dust concentration). Referring to FIG. 5, it can be seen that the higher the fine dust concentration, the darker the gray scale image (that is, the larger the gray scale value)”). Kim does not explicitly state: 1) a plurality of preset levels; and 2) displaying pre-stored visual representation contents on the screen according thereto.
Regarding item 1), Bongsu teaches a plurality of preset levels (Title, “Fine Dust Detecting Solution And System By Computing Saturation Residual Based On AI”; Abstract, “fine dust detection solution and system through artificial intelligence (AI)-based saturation residual computation. According to the present invention, the fine dust detection solution comprises: a video acquisition step of acquiring a video indoors or outdoors; a continuous image extraction step of extracting a plurality of continuous still images from the video; a saturation data calculation step of calculating a saturation data set of each of the still images; a residual calculation step of calculating residual data between two consecutive saturation data sets among the plurality of saturation data sets; and a fine dust level calculation step of calculating a level of fine dust on the basis of the residual data. Accordingly, the level of fine dust can be economically measured in various places”; about the middle of page 7, “The analysis model may be various artificial intelligence models such as artificial neural network (ANN), machine learning, deep learning, and the like”; toward the top of page 5, “fine dust degree calculation step ( S50 ), the fine dust level may be calculated by being divided into classes. In this case, the good or bad of fine dust can be intuitively grasped. A criterion for classifying a class may be determined by applying a binary classification algorithm to a plurality of criterion residual data. The class of fine dust level can be divided into two classes of, for example, 'good' and 'bad', or into four classes of 'good', 'normal', 'bad', and 'very bad'”).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Bongsu’s good-through-bad classification of fine dust levels with Kim’s classifications for the convenience of simplified levels easily understood by the user.
Regarding item 2), Jeong teaches displaying visual representation contents on the screen according to the classified level (Title “SYSTEM FOR CONSTRUCTING 3D FINE DUST INFORMATION BASED ON IMAGE ANALYSIS AND METHOD THEREOF”; Abstract “fine dust information based on image analysis, which provides accurate spatial information on fine dust in each region” and “system comprises: a data collection unit collecting weather environment data and image data for each predetermined region; a data preprocessing unit matching the collected image data and weather environment data to group them into a data set; a data learning unit training a predetermined deep learning model on the basis of the grouped data set; and a data estimation unit inputting image data of a region of interest (ROI) in the image data collected in real-time to the trained deep learning model to estimate the concentration of fine dust and generating 3D fine dust spatial information on the basis of the estimated fine dust concentration”; second paragraph page 6 “learning models include, but are not limited to, Otsu, SVM, AlexNet, VGGNet (VGG-16), ResNet50, Inception-v3, Convolusional Neural Network (CNN), Support Vector Regression (SVR), and the like”; page 6 “deep learning learning model”; page 8 first paragraph “the fine dust concentration is assigned may be displayed by applying a predetermined color according to the fine dust concentration. Here, the concentration of fine dust is expressed by color”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Jeong’s predetermined color representation of fine dust concentration with Kim’s classifications for the convenience of simplified colors easily understood by the user. 
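For orientation only (an illustrative sketch, not of record): the division into preset levels discussed above with respect to Bongsu, together with the per-level color display discussed above with respect to Jeong, could be realized along the following lines. The thresholds, level names, and color assignments below are hypothetical placeholders chosen for illustration, not values drawn from either reference.

```python
# Hypothetical preset levels: (upper bound, level name, display color).
# Thresholds and colors are illustrative assumptions, not from the record.
LEVELS = [
    (15.0, "good", "green"),
    (35.0, "normal", "yellow"),
    (75.0, "bad", "red"),
    (float("inf"), "very bad", "dark red"),
]

def classify(concentration: float) -> tuple[str, str]:
    """Classify a measured fine dust concentration into one preset level
    and return that level with its associated display color."""
    for upper, level, color in LEVELS:
        if concentration <= upper:
            return level, color
    raise ValueError("unreachable: last band is unbounded")
```

Under these hypothetical thresholds, a measurement of 20.0 would fall in the second band and be displayed with the second band's color.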
The Examiner notes, by way of example, that the combination of the prior art could suggest to an ordinary artisan a simple good-through-bad color scheme, such as with greens, yellows, and reds (or even merely shades of gray as already suggested by Kim) for corresponding levels of good, normal, and bad, as routinely and commonsensically used for portraying useful discretized hazard information to general users. Claim(s) 7-8 is/are rejected under 35 U.S.C. 103 as being unpatentable over newly cited Kim in view of newly cited Tages, newly cited Lee, newly cited Kim et al (US 20240029457 A1; hereafter “Kim457”), and in further view of Applicant cited Jeong. Regarding claim 7 and claim 8, where claim 7 depends on claim 5 and where claim 8 depends on claim 7, as best understood, Kim teaches wherein the fine dust measurement module (portion of computing device 12 for fine dust measurement) is configured to: input a plurality of converted images of different types to the deep learning model when training the deep learning model to respectively output fine dust concentration prediction values for the plurality of converted images, and train the deep learning model to minimize the difference (at once so envisaged; additional obviousness analysis provided for extent of minimization and accuracy/precision of correctness, Examiner acknowledging that Kim teaches estimation of fine dust concentration) between each fine dust concentration prediction value and a correct answer value (Title “METHOD FOR ESTIMATING CONCENTRATION OF FINE DUST AND APPARATUS FOR EXECUTING THE METHOD”); and extract a prediction value close (silent to closest; Examiner re-acknowledges estimation) to the correct answer value among the fine dust concentration prediction values, and store a type of a converted image corresponding to the extracted prediction value (fourth paragraph from bottom of page 5 “network that has been trained to output”; about middle of page 5 “a convolutional neural network (CNN) includes a feature 
extraction network (N1) and a reconstruction network (N2). Here, the feature extraction network N1 may be a network that has been trained to generate a delivery amount map based on the input R channel compressed image, G channel compressed image, and B channel compressed image”; reference claim on page 8 “The convolutional neural network, A feature extraction network learned to generate the transfer amount map when the R-channel compressed image, the G-channel compressed image, and the B-channel compressed image are input; And And a reconstruction network trained to output the fine dust removal image when the transmission amount map and the compressed image are input”; top of page 6 “the computing device 12 may calculate a statistic capable of predicting the fine dust concentration based on the residual image. In an exemplary embodiment, the computing device 12 may calculate a variance value or an average value for each pixel of the residual image as a statistic for predicting the fine dust concentration. In addition, the computing device 12 may calculate the entropy of the residual image as a statistic for predicting the concentration of fine dust. The computing device 12 may calculate the entropy (H) of the residual image through Equation 1 below”; about middle of page 6 “the computing device 12 may estimate the concentration of fine dust in the captured image based on the statistics calculated from the residual image. Here, the statistic may be provided to correspond to the fine dust concentration one-to-one” and “correlating a statistic (variance or average value) calculated based on an image taken at a location where the actual fine dust concentration is measured and an actual fine dust concentration measurement value”; see fig. 
6) a conversion module (portion of computing device 12 for image conversion) configured to convert all or a part of a photographed image of the photographing module into an image having different characteristics so as to generate one or more converted images (Abstract “loss-compressing the captured image and storing the compressed image by a preset unit time length”; second to last paragraph of page 5 “the computing device 12 may generate a residual image by calculating a difference in pixel values between pixels of the compressed image and the fine dust removal image. In this case, the residual image reflects the amount of change in the image due to fine dust based on the compressed image”; last paragraph of page 5 “computing device 12 may convert the residual image into a gray scale image and visualize it (ie, visually express the fine dust concentration)”); Kim is silent to items: 1a) (claim 8 limitation) acquire one or more of the environment information and climate information at the time of the photography; 1b) (claim 7 limitation) matching the type with one or more of environment information and climate information at the time of the photography; and 1c) (claim 8 limitation) determine into what type of an image to convert all or a part of the photographed image based on one or more of the acquired environment information and climate information, and generate a converted image by converting all or a part of the photographed image into an image of the determined type. Kim does not explicitly state item 2) (claim 7 limitation): minimize the difference between each fine dust concentration prediction value and a correct answer value; and extract a prediction value closest to the correct answer value among the fine dust concentration prediction values. 
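For orientation only (an illustrative sketch, not of record): the residual-image statistics quoted above from Kim (a pixel-wise difference between the compressed image and the fine-dust-removed image, followed by a mean, a variance, and an entropy computed over the residual as statistics predictive of fine dust concentration) could be realized along the following lines. The flat pixel-list representation and the base-2 entropy are illustrative assumptions, not citations from the reference.

```python
import math
from collections import Counter

def residual_statistics(compressed, dust_removed):
    """Return (mean, variance, entropy) of the residual between two
    equal-length flat lists of gray-scale pixel values (0-255).

    The residual reflects the image change attributable to fine dust;
    its statistics can then be correlated with measured concentrations.
    """
    residual = [abs(a - b) for a, b in zip(compressed, dust_removed)]
    n = len(residual)
    mean = sum(residual) / n
    variance = sum((r - mean) ** 2 for r in residual) / n
    # Entropy H = -sum(p * log2(p)) over the residual gray-level histogram
    counts = Counter(residual)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return mean, variance, entropy
```

A residual that is uniform (e.g., every pixel differing by the same amount) yields zero variance and zero entropy, while a more varied residual yields larger statistics.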
Regarding item 1), Jeong teaches a conversion module (conversion portion of unit 100) wherein the conversion module (conversion portion of unit 100) is configured to acquire one or more of the environment information and climate information at the time of the photography, determine into what type of an image to convert all or a part of the photographed image based on one or more of the acquired environment information and climate information, and generate a converted image by converting all or a part of the photographed image into an image of the determined type, and to store a type of a converted image corresponding to the extracted prediction value matching the type with one or more of environment information and climate information at the time of the photography (Title “SYSTEM FOR CONSTRUCTING 3D FINE DUST INFORMATION BASED ON IMAGE ANALYSIS AND METHOD THEREOF”; Abstract “a data collection unit collecting weather environment data and image data for each predetermined region; a data preprocessing unit matching the collected image data and weather environment data to group them into a data set; a data learning unit training a predetermined deep learning model on the basis of the grouped data set; and a data estimation unit inputting image data of a region of interest (ROI) in the image data collected in real-time to the trained deep learning model to estimate the concentration of fine dust and generating 3D fine dust spatial information on the basis of the estimated fine dust concentration”; second paragraph of page 6 “Examples of such learning models include, but are not limited to, Otsu, SVM, AlexNet, VGGNet (VGG-16), ResNet50, Inception-v3, Convolusional Neural Network (CNN), Support Vector Regression (SVR), and the like”; after middle of page 4 “big data for learning may be divided and constructed according to a predefined category, for example, by month or season, by time, by fine dust concentration, by means, and by means of being divided and constructed”; 
about middle of page 3 “predetermined deep learning model is trained based on the meteorological environment data obtained through a measuring instrument and image data obtained” and “pre-trained deep learning model to estimate the fine dust concentration”; towards bottom of page 4 “In this case, the big data for learning may be constructed in consideration of weather factors, which may include, for example, sky conditions (sunny, cloudy, cloudy, etc.) and cloudiness. The data preprocessor 200 may set an ROI region in the image data, extract image data of the ROI region, and match the extracted image data of the RIO region and weather environment data according to time and space”; about middle of page 6 “weather environment data is a weather environment factor that can affect the concentration of fine dust, and may include temperature and humidity”; after middle of page 3 “data collection unit 100 may select an ROI target within a predetermined area, and collect weather environment data for each selected ROI. Here, the weather environment data includes, for example, fine dust, temperature, humidity, UVI (Ultraviolet Index), wind speed, wind direction, and cloud amount, but may be largely divided into public data and sensing data. The public data may include fine dust data obtained from the Meteorological Agency, etc., weather data such as wind speed and precipitation, and Geographic Information System (GIS) data. The sensing data may include data acquired by a fine dust measuring device, data acquired using a drone observation device, and data acquired by a weather environment observer. In addition to the data described herein, data obtained through various paths may be applied”). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Jeong’s fine dust concentration and environment monitoring device and associated method with Kim’s environment monitoring device and associated method for the expected purpose of improving accuracy/precision of the fine dust measurements by factoring environmental weather factors into the training and predictions, as well as providing more robust availability of predictions in more inclement weather. With respect to the so-called conversion module being so configured (as opposed to another module), the Examiner again notes that it has been held that constructing a formerly integral structure in various elements involves only routine skill in the art, and that forming in one piece an article which has formerly been formed in two pieces and put together involves only routine skill in the art, see previously provided citations to the MPEP. In the present case, it is still the Examiner's position that only ordinary skill in the art is required to reallocate control/computation operations to specialized module components of a computer/processor/controller as convenient, and it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to so allocate with the expected benefits that the circuitry for each task can then be more specialized and more easily replaced and/or repaired/updated, further now including that these conversion activities be associated with a component designated by name as a conversion module. 
Regarding item 2) and further pertinent in part to item 1): Kim457 teaches a conversion module (conversion portion of prediction/inference device 100) configured to acquire one or more of the environment information and climate information at the time of the photography, determine into what type of an image to convert all or a part of the photographed image based on one or more of the acquired environment information and climate information (e.g., environmental features), and generate a converted image by converting all or a part of the photographed image into an image of the determined type, and to input a plurality of converted images of different types to the deep learning model when training the deep learning model to respectively output fine dust concentration prediction values for the plurality of converted images, and train the deep learning model to minimize the difference between each fine dust concentration prediction value and a correct answer value; and extract a prediction value closest to the correct answer value among the fine dust concentration prediction values, and store a type of a converted image corresponding to the extracted prediction value (Title “METHOD OF PREDICTING FINE DUST CONCENTRATION AND INFERRING SOURCE BY USING LOCAL PUBLIC DATA AND PREDICTION AND INFERENCE DEVICE”; Abstract; [0006] “observation stations have been installed and have operated to measure fine dust concentrations by regions in real time”; [0007] “predicting a fine dust concentration with high accuracy, determining whether high-concentration fine dust is generated, inferring a fine dust source, and the like, through the development of a prediction system for recognizing whether high-concentration fine dust is generated and taking a preemptive measure”; [0009] “predicting a fine dust concentration and inferring a fine dust source by using local public data and a prediction and inference device to increase the prediction accuracy of a fine dust concentration by 
converging a convolution neural network (CNN)-based training result of image classification of fine dust generation situations and a recurrent neural network (RNN)-based training result of fine dust concentration prediction”; [0010] “inferring a fine dust source from a training result of image classification of fine dust generation situations such that an emergency reduction policy measure may be efficiently implemented”; [0012] “converting pieces of time-series data collected in consecutive times into an image dataset for training and by training the image dataset for training in a CNN-based image classification model”; [0013] “correcting the inferred fine dust generation grade through a training result of an RNN model”; [0017] “training the image dataset for training in a CNN-based image classification model”; [0049] “deep learning”; [0060] “minimize a loss”; [0060] “fine dust concentration may be affected by seasons, days, times, weather conditions, economic activities, and the like”; [0132] “perform prediction and fine dust source inference with a certain level of accuracy despite an environmental change when inferring a fine dust concentration”; [0046] “temperature, ozone, dust, air quality, etc.”; [0064] “environmental features, such as a wind speed, temperature, humidity, precipitation, cloud amount, and the like, that is, meteorological information, to predict a degree of air pollution”; [0080] “may infer a fine dust source by using the fine dust grade classification result” and “sources may be soil dust, salt from seawater, pollen from plants, and the like. 
The artificial sources may be emissions generated when fossil fuels, such as coal and petroleum, are burned in boilers or power plants, automobile exhaust fumes, blowing dust from construction sites or the like, raw materials in powder in factories, powdery ingredients in subsidiary material processes, smoke from incineration plants, and the like”; [0014] “visually displaying”; [0015] “numerically displaying the predicted fine dust concentration”; [0074] “fine dust concentration training storage module M2 may store time-series data related to fine dust and an RNN-based training result of which a training dataset is the time-series data related to fine dust”; [0077] “fine dust grade classification training storage module”). It has been held that discovering an optimum value of a result-effective variable involves only routine skill in the art, see MPEP § 2144.05(II)(B) and In re Boesch, 617 F.2d 272, 205 USPQ 215 (CCPA 1980). In the present case it is the Examiner's position that optimizing an estimation to a correct answer by minimizing the difference between a prediction and a correct answer value is a commonsense and routine activity. In view of the above, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to optimize Kim’s estimation to a correct answer by minimizing the difference between a prediction and a correct answer value—as factually supported by Kim457’s minimization and correctness with high accuracy—for the commonsense advantage of so increasing accuracy and thus providing better health and safety guidance as well as providing better historical records for appropriate tracking of changes. Claim(s) 12 is/are rejected under 35 U.S.C. 103 as being unpatentable over newly cited Kim in view of newly cited Tages, newly cited Lee, and in further view of newly cited Kang et al (US 20230156316 A1; hereafter “Kang”). 
Regarding claim 12, which depends on claim 11, as best understood, Kim as modified (especially by Lee, see analysis of preceding claims) broadly and reasonably suggests wherein the environment monitoring device (fig. 1, computing environment 10) is configured to transmit one or more of the photographed image, the converted image, the converted data, and the fine dust concentration measurement value (Kim teaches the fine dust concentration) to other environment monitoring devices (fig. 1, computing environment 10; other smartphones) based on a degree of network communication (if no communication, not transmitting; otherwise transmitting) between the environment monitoring device (fig. 1, computing environment 10) and other environment monitoring devices (fig. 1, computing environment 10; other smartphones) (Lee: page 506 first full paragraph “Most current smartphones (e.g., iPhone, HTC, Samsung Galaxy, Black-Berry) have a micro-electro-mechanical system (MEMS) equipped not only with sensors and chips (including high-resolution camera, global positioning system [GPS], magnetometer, third-generation [3G] chip, Wi-Fi chip), but also with an operating system (OS) such as iOS or Android OS. 
In addition to receiving high-resolution images, camera location information, and camera orientation information on a real-time basis, smartphones allow a monitoring system”; page 513 section RESULTS AND DISCUSSION “the app uses an image-processing method (average) that makes it possible not only to perform image processing on a real-time basis, but also to receive image-processing results on a real-time basis via HTTP or FTP over a wireless network”; bottom of page 514 through top of page 515 “3G and Wi-Fi wireless network chips”; page 516 “the system proposed in this article can be operated with smartphones of any type, and can be used to carry out real-time monitoring wherever a 3G or Wi-Fi wireless network is available”; see Table 3). Kim as modified does not explicitly teach wherein the environment monitoring device is configured to transmit one or more of the photographed image, the converted image, the converted data, and the fine dust concentration measurement value to other environment monitoring devices based on a degree of network communication between the environment monitoring device and other environment monitoring devices and hardware specification information of other environment monitoring devices. 
Kang teaches wherein an electronic device is configured to transmit one or more of the photographed image, the converted image, the converted data, and a measurement value to other electronic devices based on a degree of network communication between the electronic device and other electronic devices and hardware specification information of other electronic devices (Title “ELECTRONIC DEVICE AND IMAGE TRANSMISSION METHOD BY ELECTRONIC DEVICE”; Abstract “An electronic device includes a first communication module; a second communication module; a sensor module; a camera module configured to capture a photographed image; and a processor”; [0047] “the electronic device 101 in the network environment 100 may communicate with an electronic device 102 via a first network 198 (e.g., a short-range wireless communication network), or an electronic device 104 or a server 108 via a second network 199 (e.g., a long-range wireless communication network)”; [0063]; [0068] “Each of the external electronic devices 102 or 104 may be a device of a same type as, or a different type, from the electronic device 101”; [0079] “communication compatibility information from the first information (e.g., Bluetooth Low Energy (BLE) packet message and/or user registration information) received through the second communication module 293, and may activate the first communication module 291 to perform UWB communication with an external electronic device based on the UWB communication compatibility information”; [0088] “may convert the detected photographed image according to a set transmission method and transmission quality information (e.g., HD/FHD/UHD/4K/8K) to transmit the same to the shared external electronic device. The set transmission method may use the second communication module 293 and/or the fourth communication module 297 to effect the transmissions. 
For example, the second communication module 293 may use a short-range communication network such as Bluetooth, WiFi direct, or IrDA, and the fourth communication module 297 may use a long-distance network such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network, but is not limited to the above-described example”; [0113] “the first communication module 291 may include an UWB communication module capable of transmitting and/or receiving UWB signals to and/or from an external electronic device using a plurality of antennas for UWB communication”; [0114] “the second communication module 293 may include at least one of a wireless LAN module (not shown) and a short-range communication module (not shown), and may include a near-field communication (NFC) module, a Bluetooth legacy communication module, and/or a BLE communication module as the short-range communication module (not shown)”; [0115] “the third communication module 295 may include a global navigation satellite system (GNSS) communication module”; [0116] “the fourth communication module 297 may include a telecommunication network such as a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network (e.g., LAN or WAN)”; [0119] “the data processor 321 may generate the photographed image as data for transmission according to a specified transmission method and/or transmission quality (e.g., HD/FHD/UHD/4K/8K), and may control to transmit the generated data to the shared external electronic device using at least one communication module among the second communication module (e.g., the second communication module 293 of FIG. 2) to the fourth communication module (e.g., the communication module 297 of FIG. 
4)”; [0188] “may convert the photographed image according to a specified transmission method and transmission quality information and automatically transmit the converted image to the shared external electronic device”; [0293] “[t]he electronic devices may include, for example, a portable communication device (e.g., a smartphone)”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Kang’s expanded communication transmission alternatives based on electronic device type communication availabilities and compatibilities, and corresponding appropriate control of the conversion of image data for said various appropriate communication transmission alternatives, with Kim’s electronic devices (and Lee’s environment monitoring networking), thereby providing additional communication capabilities that extend the type of smartphones and other electronic devices available to be connected to the network despite differences in physical component(s) and situational/location network availability of various communication means. Claim(s) 14 is/are rejected under 35 U.S.C. 103 as being unpatentable over newly cited Kim in view of newly cited Tages, newly cited Lee, and in further view of newly cited Shin (US 20200412857 A1; hereafter “Shin”). Regarding claim 14, which depends on claim 1, Kim teaches wherein the environment monitoring device (fig. 1, computing environment 10) is provided to perform data communication (via communication interfaces 26) with the smart phone (fig. 
1, computing device 12) (bottom of page 3 “Computing device 12 may also include one or more input/output interfaces 22 and one or more network communication interfaces 26”), is provided to monitor environment-related information, and further comprises a main board (at once so envisaged as main board of computing device 12 for displaying with display device 24; additional obviousness analysis provided) configured to display environment-related information using the smart phone (fig. 1, computing device 12). Kim is silent to other environment-related information other than the fine dust concentration, as well as to associated display of the said other environment-related information. Lee teaches wherein an environment monitoring device is provided to perform data communication with the smart phone, is provided to monitor other environment-related information other than fine dust concentration, and further comprises a main board (similarly so at once envisaged that a smart phone has a main board for displaying information) configured to display other environment-related information using the smart phone (Title “ASSESSMENT OF SMARTPHONE-BASED TECHNOLOGY FOR REMOTE ENVIRONMENTAL MONITORING AND ITS DEVELOPMENT”; page 506 first full paragraph “Most current smartphones (e.g., iPhone, HTC, Samsung Galaxy, Black-Berry) have a micro-electro-mechanical system (MEMS) equipped not only with sensors and chips (including high-resolution camera, global positioning system [GPS], magnetometer, third-generation [3G] chip, Wi-Fi chip),but also with an operating system (OS) such as iOS or Android OS. In addition to receiving high-resolution images, camera location information, and camera orientation information on a real-time basis, smartphones allow a monitoring system to be established at a lower cost than other current technologies. 
The smartphone is an intelligent terminal in the ubiquitous concept, and offers the advantage of being operable under any conditions in a 3G network environment. In particular, this technology will be able to overcome the existing network limitations wherever a nationwide integrated wireless network has been established (as in Korea). Therefore, smartphones should be very useful for environmental monitoring in a ubiquitous environment”; see fig. 5 showing Screenshots of the smartphone-based environmental monitoring; see Table 2; bottom of page 509 “utilize images sent from a smartphone in a disaster area (e.g., tsunami, storm surge, forest fire, flood), security area, or environmental monitoring area”; page 514 “Users are able to query monitoring information for each camera by connecting to the MIMS over the Web”; bottom of page 514 through top of page 515 “basic smartphone equipment includes not only a digital camera, 3G and Wi-Fi wireless network chips, and a GPS chip, but also a light sensor and a proximity sensor. The 3D position and orientation of the camera can be acquired from the GPS, magnetometer, and accelerometer, and provide the exterior orientation parameters necessary to perform geometric correction. The light sensor can be used to detect forest fires at night.”; towards bottom of page 515 “system is able to receive monitoring information on a real-time basis via a Wi-Fi wireless network, without an additional wireless network card. Of course, in cases B and A, it is also possible to send information on a real-time basis”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lee’s smartphone environmental monitoring system (inclusive of support & compartment as well as software and networking for remote monitoring & control) with Kim’s smartphone environment monitoring of fine dust for the same combination and motivation provided for the independent claim. 
The Examiner acknowledges that Lee is silent to the smartphone having a main board, wherein the main board is configured to display other environment-related information using the smart phone. However: The Examiner takes Official Notice that a main board is conventional and that smartphones routinely have a main board including being configured to display information. Furthermore, and as supporting factual evidence of the aforementioned assertion, Shin teaches a smartphone comprising a main board configured to display information (Title “PRINTED CIRCUIT BOARD AND ELECTRONIC DEVICE COMPRISING THE SAME”; Abstract; [0031] “an electronic device may be, for example, a smartphone 1100. A mainboard 1110 may be accommodated in the smartphone 1100, and various electronic components 1120 may be physically and/or electrically connected to the mainboard 1110. In addition, other components that may or may not be physically or electrically connected to the printed circuit board 1110, such as a camera module 1130 and/or a speaker 1140, may be accommodated in the mainboard 1110. A portion of the electronic components 1120 may be chip related components, for example, a semiconductor package 1121, but are not limited thereto. The semiconductor package 1121 may be a surface mounted type package, such as a semiconductor chip or a passive component on a package board of a multilayer printed circuit board, but is not limited thereto”; [0028] “the electronic device 1000 includes other components that may or may not be physically or electrically connected to the main board 1010. 
These other components may include, for example, a camera 1050, an antenna 1060, a display 1070, a battery 1080, an audio codec (not illustrated), a video codec (not illustrated), a power amplifier (not illustrated), a compass (not illustrated), an accelerometer (not illustrated), a gyroscope (not illustrated), a speaker (not illustrated), a mass storage unit (for example, a hard disk drive) (not illustrated), a compact disk (CD) drive (not illustrated), a digital versatile disk (DVD) drive (not illustrated), or the like. However, these other components are not limited thereto, and may also include other components used for various purposes depending on a type of electronic device 1000, or the like”). In view of the above, one of ordinary skill in the art at the time the invention was effectively filed either would have at once envisaged that Kim’s smartphone has a mainboard for displaying information on the display device of the smartphone, or, in the alternative, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine a mainboard configured therefor—as factually supported, by way of example, by Shin’s smartphone mainboard having components such as camera and display connected thereto—for the expected purpose of providing connection between components and control thereof including for display properties such as resolution/refresh as well as the information itself. The Examiner additionally notes that the particular combination with Shin’s smartphone mainboard enables a plethora of additional components to be physically and/or electrically connected thereto (see preceding citations, the Examiner noting by way of example camera, display, battery, sensors, etc.), therefore commonsensically enabling a common convenient physical mounting as well as shared electrical components. 
The Examiner additionally notes, with respect to integrating various components with a main board, that it has been held that forming in one piece an article which has formerly been formed in two pieces and put together involves only routine skill in the art; see MPEP § 2144.04(V)(B), Howard v. Detroit Stove Works, 150 U.S. 164 (1893), and In re Larson, 340 F.2d 965, 968, 144 USPQ 347, 349 (CCPA 1965). In the present case, it is the Examiner's position that only ordinary skill in the art is required to integrally mount together and/or electrically connect various modules/components to a smartphone mainboard.

Claim(s) 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over newly cited Kim in view of newly cited Tages, newly cited Lee, newly cited Shin, and in further view of newly cited Wen* et al. (CN 206074839 U; hereafter "Wen").

*Machine translation provided by the Examiner with the foreign document and utilized for the English citations.

Regarding claim 15, which depends on claim 14, Kim as modified suggests a main board (at once envisaged as the main board of computing device 12 for displaying with display device 24; see the analysis of claim 14, which includes the explicit combination of Shin's smartphone mainboard) and a power supply module (at once envisaged in that the smartphone has a main board with a power supply module; see the analysis of claim 14, which includes the explicit combination of Shin's smartphone mainboard having a battery) configured to supply power to the environment monitoring device (fig. 1, computing environment 10). The Examiner emphasizes that smartphones conventionally have a main board with a power supply module.
Kim does not teach the following items:

2a) a harmful gas measurement module configured to measure a preset harmful gas concentration around the environment monitoring device;
2b) an air quality measurement module configured to measure air quality around the environment monitoring device;
2c) a fire detection module configured to detect a fire around the environment monitoring device;
2d) a weather measurement module configured to measure a preset weather factor around the environment monitoring device; and
2e) wherein these modules are comprised by the main board.

Regarding item 2c), and pertinent to item 2d), Lee teaches a smartphone comprising a fire detection module configured to detect a fire around the environment monitoring device, and is suggestive of a weather measurement module configured to measure weather factors around the environment monitoring device (Abstract "In particular, the need for remote monitoring is increasing in response to climatic disasters, such as flooding, storms, and rising tides caused by global warming. We developed a smartphone-based environmental monitoring system that enables remote monitoring in any place and at any time"; Introduction "remote monitoring to respond to meteorological disasters due to global warming, including flooding caused by storms and storm surges. Natural disasters such as earthquakes, storm surges, and volcanic eruptions create serious problems, and remote monitoring allows such disasters to be forecasted and verified as quickly as possible"; page 6, Section Stereo Rectification "utilize images sent from a smartphone in a disaster area (e.g., tsunami, storm surge, forest fire, flood)"; bottom of page 514 through top of page 515 "The light sensor can be used to detect forest fires at night").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Lee's smartphone environmental monitoring system with Kim's smartphone environment monitoring of fine dust for the same combination and motivation provided for the independent claim, the Examiner further emphasizing that detection of fire and inclement weather, for forecasting and/or verification, improves emergency response and thus can be utilized to increase safety and/or reduce damage.

Regarding items 2a)-2e), Wen teaches a main board (circuit board) configured to display other environment-related information using the smart phone (Title "Handheld Network Station"; Abstract "handheld network station, comprising a semi-open shell, an intelligent mobile phone and a circuit board, wherein the front side surface of the intelligent phone embedded in the semi-open shell; the circuit board is arranged between the side face of the intelligent mobile phone and the semi-open shell in. the circuit board is respectively set with a solar total radiation sensor, a temperature and humidity sensor, a wind speed sensor, a PM2.5 detector, benzene detector, formaldehyde detector, used for receiving and processing the data of the CPU, a Bluetooth communication module"; Section Technical Field "portable weather station, especially a handheld network station suitable for forest fire monitoring, outdoor sports, weather environment science meteorological research"; page 4, last paragraph before the reference claims "displayed on the screen of the mobile phone together with the weather and air quality data"; the Examiner further notes that 2.5 μm particulate matter includes fine dust), comprising:

a harmful gas measurement module configured to measure a preset harmful gas concentration around the environment monitoring device (Abstract "benzene detector, formaldehyde detector"; Section Invention Content "measure the formaldehyde content, benzene content");

an air quality measurement module configured to measure air quality around the environment monitoring device (about the middle of page 3 "air quality parameter"; Section Invention Content "measure the formaldehyde content, benzene content, PM2.5 air quality"; page 4, last paragraph before the reference claims "measures air quality");

a fire detection module configured to detect a fire around the environment monitoring device (Section Technical Field "suitable for forest fire monitoring"; about the middle of page 3 "integration and monitoring weather data of a district. it can be widely applied to forest fire"); and

a weather measurement module configured to measure a preset weather factor around the environment monitoring device (Section Technical Field "portable weather station" and "weather environment science meteorological research"; top of page 2 "traditional weather information, that is, the temperature and humidity sensor, a wind speed sensor, an air pressure sensor, a solar radiation sensor for monitoring the temperature, humidity, wind speed, pressure, solar radiation value"; about the middle of page 2, bottom of page 3, and last paragraph before the reference claims "total solar radiation sensor, a temperature and humidity sensor, a wind speed sensor"; Section Invention Content, about the middle of page 3 "measure the environment temperature, humidity, wind speed, solar radiation value").

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine Wen's additional measurement modules and associated method with Kim's smartphone, thereby providing the expected benefits of further increasing detection of fire, weather, harmful gases, and air quality, and thus providing a more complete environmental analysis which can be utilized to increase the safety and quality of (especially human) life, as well as providing utility in forecasting and/or verifying unhealthy or dangerous situations that should elicit an emergency response, and thus can also be utilized to reduce damage or harm.

The Examiner again notes, with respect to integrating various components with a main board, that it has been held that forming in one piece an article which has formerly been formed in two pieces and put together involves only routine skill in the art; see MPEP § 2144.04(V)(B), Howard v. Detroit Stove Works, 150 U.S. 164 (1893), and In re Larson, 340 F.2d 965, 968, 144 USPQ 347, 349 (CCPA 1965).
In the present case, it is still the Examiner's position that only ordinary skill in the art is required to integrally mount together and/or electrically connect various modules/components to a smartphone mainboard.

Allowable Subject Matter

Claim(s) 3-4 is/are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claim(s) 13 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.

When this application is finally acted upon and allowed (i.e., at the Notice of Allowance), the Examiner will determine, at the same time, whether the reasons why the application is being allowed are sufficiently evident from the record; see MPEP § 1302.14(I). With regard to claim 13, upon Applicant's amendment to overcome the rejections and objections raised by the Examiner, and upon the Examiner's better understanding of the invention, a comparison of the prior art to the claims will again be made.

Conclusion

The prior art made of record and not relied upon is considered pertinent to Applicant's disclosure. Applicant is invited to review the PTO-892 form accompanying this Office Action, which lists prior art relevant to the instant invention cited by the Examiner.

Examiner interviews are available via telephone and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

Any inquiry concerning this communication or earlier communications from the Examiner should be directed to DAVID L SINGER, whose telephone number is 303-297-4317.
The Examiner can normally be reached Monday - Friday, 8:00 am - 6:00 pm CT, except alternating Fridays. If attempts to reach the Examiner by telephone are unsuccessful, the Examiner's supervisor, John Breene, can be reached at 571-272-4107. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID L SINGER/
Primary Examiner, Art Unit 2855
24JAN2026

Prosecution Timeline

Dec 27, 2023
Application Filed
Jan 24, 2026
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12553914
CONTAINER TRANSFER METHOD AND CONTAINER TRANSFER APPARATUS WITH CLOSEABLE HOLDER
2y 5m to grant Granted Feb 17, 2026
Patent 12553769
POSITIONING METHOD OF ELECTRIC POLE AND ESTIMATING METHOD OF THE STATE OF OVERHEAD OPTICAL FIBER CABLE
2y 5m to grant Granted Feb 17, 2026
Patent 12492934
DEVICE FOR MEASURING A PARAMETER INDICATIVE OF THE ROTATIONAL SPEED OF A COMPONENT
2y 5m to grant Granted Dec 09, 2025
Patent 12493008
ABNORMALITY DETECTING DEVICE FOR CONDUCTIVE PARTICLES IN A LUBRICANT AND MECHANICAL DEVICE
2y 5m to grant Granted Dec 09, 2025
Patent 12487105
REVERSIBLY MAGNETICALLY CLOSEABLE SENSOR HOUSING
2y 5m to grant Granted Dec 02, 2025
Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
68%
Grant Probability
99%
With Interview (+43.8%)
2y 10m
Median Time to Grant
Low
PTA Risk
Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
